Dec 11 16:00:50 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 11 16:00:50 crc kubenswrapper[5120]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 11 16:00:50 crc kubenswrapper[5120]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 11 16:00:50 crc kubenswrapper[5120]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 11 16:00:50 crc kubenswrapper[5120]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 11 16:00:50 crc kubenswrapper[5120]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 11 16:00:50 crc kubenswrapper[5120]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
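The deprecation notices above all point at the file named by the kubelet's --config flag. As an illustration only (not this node's actual configuration), the flagged options map onto KubeletConfiguration fields roughly as sketched below; the values shown are hypothetical placeholders:

```yaml
# Hypothetical /etc/kubernetes/kubelet.conf fragment (kubelet config file)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock  # replaces --container-runtime-endpoint
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec  # replaces --volume-plugin-dir
registerWithTaints:                                       # replaces --register-with-taints
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
systemReserved:                                           # replaces --system-reserved
  cpu: 500m
  memory: 1Gi
```

Moving these settings into the config file silences the first group of warnings; --minimum-container-ttl-duration has no config-file equivalent and is superseded by the eviction thresholds, as the message says.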
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.835991 5120 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838633 5120 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838657 5120 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838662 5120 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838666 5120 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838669 5120 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838673 5120 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838676 5120 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838680 5120 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838684 5120 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838687 5120 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838690 5120 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838693 5120 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838697 5120 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838700 5120 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838703 5120 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838706 5120 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838709 5120 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838712 5120 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838716 5120 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838719 5120 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838722 5120 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838737 5120 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838741 5120 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838744 5120 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838747 5120 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838750 5120 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838753 5120 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838756 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838760 5120 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838763 5120 feature_gate.go:328] unrecognized feature gate: Example2
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838766 5120 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838769 5120 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838772 5120 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838776 5120 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838779 5120 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838782 5120 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838785 5120 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838788 5120 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838791 5120 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838796 5120 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838800 5120 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838803 5120 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838806 5120 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838810 5120 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838814 5120 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838817 5120 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838821 5120 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838824 5120 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838827 5120 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838830 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838833 5120 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838836 5120 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838840 5120 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838843 5120 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838852 5120 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838855 5120 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838858 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838861 5120 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838864 5120 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838868 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838871 5120 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838876 5120 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838880 5120 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838883 5120 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838886 5120 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838889 5120 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838895 5120 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838899 5120 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838903 5120 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838939 5120 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838943 5120 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838947 5120 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838951 5120 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838954 5120 feature_gate.go:328] unrecognized feature gate: Example
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838957 5120 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838961 5120 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838964 5120 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838967 5120 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838971 5120 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838975 5120 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838979 5120 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838982 5120 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838985 5120 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838988 5120 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838991 5120 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.838994 5120 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840407 5120 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840415 5120 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840419 5120 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840426 5120 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840430 5120 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840434 5120 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840437 5120 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840441 5120 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840444 5120 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840447 5120 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840451 5120 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840454 5120 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840457 5120 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840461 5120 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840464 5120 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840470 5120 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840473 5120 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840477 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840481 5120 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840484 5120 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840488 5120 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840491 5120 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840494 5120 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840498 5120 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840501 5120 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840504 5120 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840508 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840511 5120 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840516 5120 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840519 5120 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840523 5120 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840526 5120 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840530 5120 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840565 5120 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840569 5120 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840572 5120 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840576 5120 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840579 5120 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840582 5120 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840585 5120 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840589 5120 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840595 5120 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840598 5120 feature_gate.go:328] unrecognized feature gate: Example2
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840601 5120 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840605 5120 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840608 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840612 5120 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840617 5120 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840620 5120 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840624 5120 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840627 5120 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840631 5120 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840634 5120 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840641 5120 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840644 5120 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840648 5120 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840651 5120 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840654 5120 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840658 5120 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840661 5120 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840664 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840668 5120 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840671 5120 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840674 5120 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840678 5120 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840681 5120 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840716 5120 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840721 5120 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840725 5120 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840728 5120 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840731 5120 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840735 5120 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840738 5120 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840741 5120 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840745 5120 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840748 5120 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840751 5120 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840759 5120 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840762 5120 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840776 5120 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840811 5120 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840816 5120 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840822 5120 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840827 5120 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840830 5120 feature_gate.go:328] unrecognized feature gate: Example
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.840834 5120 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841181 5120 flags.go:64] FLAG: --address="0.0.0.0"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841193 5120 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841219 5120 flags.go:64] FLAG: --anonymous-auth="true"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841224 5120 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841265 5120 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841269 5120 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841275 5120 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841285 5120 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841290 5120 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841294 5120 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841298 5120 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841302 5120 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841306 5120 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841350 5120 flags.go:64] FLAG: --cgroup-root=""
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841357 5120 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841366 5120 flags.go:64] FLAG: --client-ca-file=""
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841371 5120 flags.go:64] FLAG: --cloud-config=""
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841376 5120 flags.go:64] FLAG: --cloud-provider=""
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841381 5120 flags.go:64] FLAG: --cluster-dns="[]"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841403 5120 flags.go:64] FLAG: --cluster-domain=""
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841407 5120 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841411 5120 flags.go:64] FLAG: --config-dir=""
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841415 5120 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841419 5120 flags.go:64] FLAG: --container-log-max-files="5"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841429 5120 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841433 5120 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841438 5120 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841442 5120 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841446 5120 flags.go:64] FLAG: --contention-profiling="false"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841450 5120 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841454 5120 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841458 5120 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841468 5120 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841475 5120 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841480 5120 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841486 5120 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841490 5120 flags.go:64] FLAG: --enable-load-reader="false"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841495 5120 flags.go:64] FLAG: --enable-server="true"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841499 5120 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841525 5120 flags.go:64] FLAG: --event-burst="100"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841530 5120 flags.go:64] FLAG: --event-qps="50"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841538 5120 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841543 5120 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841547 5120 flags.go:64] FLAG: --eviction-hard=""
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841553 5120 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841557 5120 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841616 5120 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841626 5120 flags.go:64] FLAG: --eviction-soft=""
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841631 5120 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841638 5120 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841642 5120 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841646 5120 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841654 5120 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841659 5120 flags.go:64] FLAG: --fail-swap-on="true"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841663 5120 flags.go:64] FLAG: --feature-gates=""
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841668 5120 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841673 5120 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841685 5120 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841689 5120 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841705 5120 flags.go:64] FLAG: --healthz-port="10248"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841713 5120 flags.go:64] FLAG: --help="false"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841721 5120 flags.go:64] FLAG: --hostname-override=""
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841730 5120 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841734 5120 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841738 5120 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841743 5120 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841751 5120 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841756 5120 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841760 5120 flags.go:64] FLAG: --image-service-endpoint=""
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841764 5120 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841769 5120 flags.go:64] FLAG: --kube-api-burst="100"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841777 5120 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841782 5120 flags.go:64] FLAG: --kube-api-qps="50"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841786 5120 flags.go:64] FLAG: --kube-reserved=""
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841794 5120 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841799 5120 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841803 5120 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841807 5120 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841811 5120 flags.go:64] FLAG: --lock-file=""
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841869 5120 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841875 5120 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841883 5120 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841895 5120 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841899 5120 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841902 5120 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841906 5120 flags.go:64] FLAG: --logging-format="text"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841910 5120 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841915 5120 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841919 5120 flags.go:64] FLAG: --manifest-url=""
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841923 5120 flags.go:64] FLAG: --manifest-url-header=""
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.841932 5120 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842030 5120 
flags.go:64] FLAG: --max-open-files="1000000" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842047 5120 flags.go:64] FLAG: --max-pods="110" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842057 5120 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842067 5120 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842075 5120 flags.go:64] FLAG: --memory-manager-policy="None" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842086 5120 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842094 5120 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842102 5120 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842117 5120 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842143 5120 flags.go:64] FLAG: --node-status-max-images="50" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842169 5120 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842178 5120 flags.go:64] FLAG: --oom-score-adj="-999" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842185 5120 flags.go:64] FLAG: --pod-cidr="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842192 5120 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842211 5120 flags.go:64] FLAG: --pod-manifest-path="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842219 5120 flags.go:64] FLAG: --pod-max-pids="-1" Dec 11 
16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842227 5120 flags.go:64] FLAG: --pods-per-core="0" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842235 5120 flags.go:64] FLAG: --port="10250" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842245 5120 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842257 5120 flags.go:64] FLAG: --provider-id="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842266 5120 flags.go:64] FLAG: --qos-reserved="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842285 5120 flags.go:64] FLAG: --read-only-port="10255" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842294 5120 flags.go:64] FLAG: --register-node="true" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842505 5120 flags.go:64] FLAG: --register-schedulable="true" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842760 5120 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842779 5120 flags.go:64] FLAG: --registry-burst="10" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842786 5120 flags.go:64] FLAG: --registry-qps="5" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842790 5120 flags.go:64] FLAG: --reserved-cpus="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842797 5120 flags.go:64] FLAG: --reserved-memory="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842804 5120 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842809 5120 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842814 5120 flags.go:64] FLAG: --rotate-certificates="false" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842818 5120 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842822 
5120 flags.go:64] FLAG: --runonce="false" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842826 5120 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842830 5120 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842835 5120 flags.go:64] FLAG: --seccomp-default="false" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842839 5120 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842844 5120 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842849 5120 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842853 5120 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842857 5120 flags.go:64] FLAG: --storage-driver-password="root" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842861 5120 flags.go:64] FLAG: --storage-driver-secure="false" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842865 5120 flags.go:64] FLAG: --storage-driver-table="stats" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842869 5120 flags.go:64] FLAG: --storage-driver-user="root" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842874 5120 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842878 5120 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842883 5120 flags.go:64] FLAG: --system-cgroups="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842887 5120 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842898 5120 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 11 
16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842903 5120 flags.go:64] FLAG: --tls-cert-file="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842907 5120 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842947 5120 flags.go:64] FLAG: --tls-min-version="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842951 5120 flags.go:64] FLAG: --tls-private-key-file="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842955 5120 flags.go:64] FLAG: --topology-manager-policy="none" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842961 5120 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842968 5120 flags.go:64] FLAG: --topology-manager-scope="container" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842973 5120 flags.go:64] FLAG: --v="2" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842980 5120 flags.go:64] FLAG: --version="false" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842986 5120 flags.go:64] FLAG: --vmodule="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842992 5120 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.842997 5120 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843175 5120 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843180 5120 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843184 5120 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843190 5120 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843193 5120 
feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843196 5120 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843200 5120 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843204 5120 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843207 5120 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843211 5120 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843214 5120 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843217 5120 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843220 5120 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843224 5120 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843227 5120 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843230 5120 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843234 5120 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843237 5120 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843241 5120 feature_gate.go:328] unrecognized 
feature gate: AzureDedicatedHosts Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843244 5120 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843247 5120 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843250 5120 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843254 5120 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843257 5120 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843261 5120 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843265 5120 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843274 5120 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843277 5120 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843280 5120 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843284 5120 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843287 5120 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843290 5120 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843293 5120 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 11 16:00:50 crc 
kubenswrapper[5120]: W1211 16:00:50.843296 5120 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843300 5120 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843304 5120 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843307 5120 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843310 5120 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843313 5120 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843317 5120 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843320 5120 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843324 5120 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843327 5120 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843330 5120 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843335 5120 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843341 5120 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843345 5120 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843348 5120 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843352 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843355 5120 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843359 5120 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843362 5120 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843365 5120 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843369 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843372 5120 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843376 5120 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843380 5120 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843384 5120 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843389 5120 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 11 16:00:50 crc 
kubenswrapper[5120]: W1211 16:00:50.843393 5120 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843396 5120 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843400 5120 feature_gate.go:328] unrecognized feature gate: Example2 Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843403 5120 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843406 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843410 5120 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843414 5120 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843417 5120 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843421 5120 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843425 5120 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843428 5120 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843433 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843437 5120 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843440 5120 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 11 16:00:50 crc 
kubenswrapper[5120]: W1211 16:00:50.843444 5120 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843447 5120 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843450 5120 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843453 5120 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843457 5120 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843460 5120 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843463 5120 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843466 5120 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843469 5120 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843473 5120 feature_gate.go:328] unrecognized feature gate: Example Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843476 5120 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843479 5120 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.843482 5120 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.843497 5120 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true 
MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.854175 5120 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.854216 5120 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854289 5120 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854297 5120 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854303 5120 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854309 5120 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854315 5120 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854320 5120 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854325 5120 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854331 5120 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854336 5120 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854341 5120 feature_gate.go:328] unrecognized feature gate: 
IngressControllerDynamicConfigurationManager Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854346 5120 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854351 5120 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854356 5120 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854363 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854367 5120 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854372 5120 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854377 5120 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854382 5120 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854387 5120 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854392 5120 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854397 5120 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854402 5120 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854406 5120 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854411 5120 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 11 
16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854416 5120 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854421 5120 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854426 5120 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854430 5120 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854435 5120 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854440 5120 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854444 5120 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854449 5120 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854454 5120 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854458 5120 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854464 5120 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854469 5120 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854473 5120 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854478 5120 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 11 16:00:50 crc 
kubenswrapper[5120]: W1211 16:00:50.854483 5120 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854488 5120 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854508 5120 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854522 5120 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854533 5120 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854539 5120 feature_gate.go:328] unrecognized feature gate: Example2
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854544 5120 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854550 5120 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854557 5120 feature_gate.go:328] unrecognized feature gate: Example
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854562 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854573 5120 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854578 5120 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854583 5120 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854588 5120 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854592 5120 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854597 5120 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854602 5120 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854607 5120 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854612 5120 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854617 5120 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854622 5120 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854627 5120 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854632 5120 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854636 5120 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854641 5120 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854646 5120 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854651 5120 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854656 5120 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854661 5120 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854666 5120 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854670 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854675 5120 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854679 5120 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854686 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854692 5120 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854698 5120 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854704 5120 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854709 5120 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854714 5120 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854719 5120 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854723 5120 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854731 5120 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854737 5120 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854742 5120 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854747 5120 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854752 5120 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854756 5120 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854762 5120 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.854772 5120 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854974 5120 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854983 5120 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854989 5120 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.854994 5120 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855000 5120 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855005 5120 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855010 5120 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855015 5120 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855020 5120 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855025 5120 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855030 5120 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855035 5120 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855040 5120 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855046 5120 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855051 5120 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855057 5120 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855065 5120 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855071 5120 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855078 5120 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855084 5120 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855090 5120 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855096 5120 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855101 5120 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855107 5120 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855114 5120 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855123 5120 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855128 5120 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855132 5120 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855137 5120 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855142 5120 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855147 5120 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855178 5120 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855185 5120 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855191 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855197 5120 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855202 5120 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855207 5120 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855212 5120 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855217 5120 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855222 5120 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855226 5120 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855232 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855237 5120 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855241 5120 feature_gate.go:328] unrecognized feature gate: Example2
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855246 5120 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855251 5120 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855256 5120 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855261 5120 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855266 5120 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855270 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855276 5120 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855282 5120 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855286 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855291 5120 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855296 5120 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855301 5120 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855306 5120 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855311 5120 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855317 5120 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855323 5120 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855327 5120 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855332 5120 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855337 5120 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855342 5120 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855346 5120 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855351 5120 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855356 5120 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855361 5120 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855366 5120 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855371 5120 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855376 5120 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855381 5120 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855386 5120 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855390 5120 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855395 5120 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855402 5120 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855408 5120 feature_gate.go:328] unrecognized feature gate: Example
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855414 5120 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855419 5120 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855424 5120 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855429 5120 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855435 5120 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855442 5120 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855448 5120 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855453 5120 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 11 16:00:50 crc kubenswrapper[5120]: W1211 16:00:50.855458 5120 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.855466 5120 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.855801 5120 server.go:962] "Client rotation is on, will bootstrap in background"
Dec 11 16:00:50 crc kubenswrapper[5120]: E1211 16:00:50.858183 5120 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.861111 5120 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.861215 5120 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.861577 5120 server.go:1019] "Starting client certificate rotation"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.861737 5120 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.861819 5120 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.866326 5120 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.868947 5120 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 11 16:00:50 crc kubenswrapper[5120]: E1211 16:00:50.869620 5120 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.877472 5120 log.go:25] "Validated CRI v1 runtime API"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.899576 5120 log.go:25] "Validated CRI v1 image API"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.901054 5120 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.903734 5120 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2025-12-11-15-54-38-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2]
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.903811 5120 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:44 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}]
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.931686 5120 manager.go:217] Machine: {Timestamp:2025-12-11 16:00:50.929530596 +0000 UTC m=+0.183833997 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33649930240 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:07ea2ba6-937b-4347-9d9b-4ade3aaec959 BootID:93660043-3b1f-49ac-bde1-adfbb3f6633e Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:44 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824967168 Type:vfs Inodes:1048576 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:4a:10:a4 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:4a:10:a4 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:9a:fc:01 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:30:f7:b6 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:87:12:5d Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:22:05:1e Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ae:3f:2a:9f:2a:a9 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:46:ea:21:e4:20:56 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649930240 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.932252 5120 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.932432 5120 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.933626 5120 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.933679 5120 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.933862 5120 topology_manager.go:138] "Creating topology manager with none policy"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.933873 5120 container_manager_linux.go:306] "Creating device plugin manager"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.933892 5120 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.934357 5120 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.934774 5120 state_mem.go:36] "Initialized new in-memory state store"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.934922 5120 server.go:1267] "Using root directory" path="/var/lib/kubelet"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.935406 5120 kubelet.go:491] "Attempting to sync node with API server"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.935543 5120 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.935571 5120 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.935584 5120 kubelet.go:397] "Adding apiserver pod source"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.935602 5120 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.937766 5120 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.937786 5120 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Dec 11 16:00:50 crc kubenswrapper[5120]: E1211 16:00:50.937994 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 11 16:00:50 crc kubenswrapper[5120]: E1211 16:00:50.938035 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.939400 5120 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.939418 5120 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.940850 5120 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.941075 5120 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.941578 5120 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.942008 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.942039 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.942048 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.942060 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.942071 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.942080 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.942110 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.942126 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.942237 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.942268 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.942310 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.942457 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.943205 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.944083 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.944364 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.955090 5120 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.955399 5120 server.go:1295] "Started kubelet"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.955630 5120 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.955749 5120 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.955893 5120 server_v1.go:47] "podresources" method="list" useActivePods=true
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.956357 5120 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 11 16:00:50 crc systemd[1]: Started Kubernetes Kubelet.
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.957948 5120 server.go:317] "Adding debug handlers to kubelet server"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.958320 5120 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.958325 5120 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 11 16:00:50 crc kubenswrapper[5120]: E1211 16:00:50.957549 5120 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.12:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18803490eb35f4fb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:50.955351291 +0000 UTC m=+0.209654632,LastTimestamp:2025-12-11 16:00:50.955351291 +0000 UTC m=+0.209654632,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.960021 5120 factory.go:55] Registering systemd factory
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.960072 5120 factory.go:223] Registration of the systemd container factory successfully
Dec 11 16:00:50 crc kubenswrapper[5120]: E1211 16:00:50.960178 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:00:50 crc kubenswrapper[5120]: E1211 16:00:50.960300 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="200ms"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.960372 5120 volume_manager.go:295] "The desired_state_of_world populator starts"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.960393 5120 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.960400 5120 factory.go:153] Registering CRI-O factory
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.960419 5120 factory.go:223] Registration of the crio container factory successfully
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.960489 5120 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.960514 5120 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.960518 5120 factory.go:103] Registering Raw factory
Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.960556 5120 manager.go:1196] Started watching for new ooms in manager
Dec 11 16:00:50 crc kubenswrapper[5120]: E1211 16:00:50.960668 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect:
connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.961585 5120 manager.go:319] Starting recovery of all containers Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.985688 5120 manager.go:324] Recovery completed Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.996249 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.996590 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.996606 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.996618 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.996630 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.996642 5120 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.996653 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.996664 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.996677 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.996688 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.996701 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.996712 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.996737 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.996749 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.996765 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.996777 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.996802 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.996845 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" 
volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.996859 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.996870 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.996882 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.996895 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.996906 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.996918 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" 
seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.996940 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.996953 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.996964 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.996976 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.997004 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.997016 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 11 16:00:50 crc 
kubenswrapper[5120]: I1211 16:00:50.997027 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.997052 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.998386 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.999485 5120 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.999537 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.999555 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.999566 5120 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.999577 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.999593 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.999604 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.999615 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.999629 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.999641 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" 
volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.999655 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.999668 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.999680 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.999692 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.999704 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.999716 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" 
seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.999730 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.999742 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.999755 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.999767 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.999779 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.999793 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.999805 5120 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.999818 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Dec 11 16:00:50 crc kubenswrapper[5120]: I1211 16:00:50.999839 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:50.999859 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:50.999871 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:50.999884 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:50.999895 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:50.999908 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:50.999925 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:50.999939 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:50.999952 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:50.999964 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:50.999975 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" 
volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:50.999986 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:50.999997 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000009 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000021 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000033 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000045 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" 
volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000061 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000072 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000084 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000096 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000108 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000121 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" 
seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000132 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000145 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000178 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000189 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000201 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000213 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000225 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000236 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000247 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000259 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000270 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000282 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000295 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000307 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000319 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000330 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000343 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000360 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000371 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000382 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000393 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000406 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000417 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000429 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000441 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000457 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000469 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000481 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000493 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000508 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000520 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000531 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000543 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000556 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000582 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000594 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000605 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000617 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000628 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000641 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000654 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000665 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000700 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000712 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000724 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000736 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000747 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000759 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000771 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000783 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000794 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000806 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000817 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000829 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000840 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000852 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000865 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000882 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000895 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000906 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000917 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000931 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000943 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000954 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000964 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000976 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000987 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.000998 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001009 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001022 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001033 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001045 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001056 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001068 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001113 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001124 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001136 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001164 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001182 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001193 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001204 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001217 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001228 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001240 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001251 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001262 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001273 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001289 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001300 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001312 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001323 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001336 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001348 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001363 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001375 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001385 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001397 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001410 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001427 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001438 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001450 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001462 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001473 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001485 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001496 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001507 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001536 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001547 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001559 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001571 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001582 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001593 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001606 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001620 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001631 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001642 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001654 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001665 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001677 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001688 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001700 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001711 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001723 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001735 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001748 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001761 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001772 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001783 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001795 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext=""
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001808 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001819 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001831 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001844 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001856 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001868 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001880 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" 
volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001891 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001903 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001930 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001941 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001953 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001964 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" 
volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001975 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001987 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.001999 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.002012 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.002094 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.002105 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" 
seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.002116 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.002128 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.002138 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.003126 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.003320 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.003342 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.003866 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.003915 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.003922 5120 cpu_manager.go:222] "Starting CPU manager" policy="none" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.003931 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.003948 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.003964 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.003977 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.003990 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.004002 5120 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.004015 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.004032 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.004046 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.004060 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.004072 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.004084 5120 reconstruct.go:97] "Volume reconstruction finished" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.004092 5120 reconciler.go:26] "Reconciler: start to 
sync state" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.003934 5120 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.004700 5120 state_mem.go:36] "Initialized new in-memory state store" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.010752 5120 policy_none.go:49] "None policy: Start" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.010784 5120 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.010799 5120 state_mem.go:35] "Initializing new in-memory state store" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.018609 5120 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.020185 5120 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.020434 5120 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.020472 5120 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.020480 5120 kubelet.go:2451] "Starting kubelet main sync loop" Dec 11 16:00:51 crc kubenswrapper[5120]: E1211 16:00:51.020513 5120 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 11 16:00:51 crc kubenswrapper[5120]: E1211 16:00:51.021923 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.060265 5120 manager.go:341] "Starting Device Plugin manager" Dec 11 16:00:51 crc kubenswrapper[5120]: E1211 16:00:51.060518 5120 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.060534 5120 server.go:85] "Starting device plugin registration server" Dec 11 16:00:51 crc kubenswrapper[5120]: E1211 16:00:51.060628 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.061050 5120 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.061071 5120 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.061445 5120 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.061542 5120 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 11 16:00:51 
crc kubenswrapper[5120]: I1211 16:00:51.061553 5120 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 11 16:00:51 crc kubenswrapper[5120]: E1211 16:00:51.064352 5120 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\"" Dec 11 16:00:51 crc kubenswrapper[5120]: E1211 16:00:51.064388 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.121352 5120 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.121540 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.122590 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.122620 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.122629 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.123950 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.124409 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.124716 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.124733 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.124748 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.124764 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.125629 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.125680 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.125738 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.125753 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.125887 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.125913 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.126321 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.126360 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.126372 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.126384 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.126392 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.126431 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.127809 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.127915 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.127955 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.128322 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.128348 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.128361 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.128388 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.128410 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.128422 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.129090 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.129162 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.129191 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.129528 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.129550 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.129562 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.129581 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.129596 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.129605 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.130238 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.130270 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.130637 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.130664 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.130676 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:00:51 crc kubenswrapper[5120]: E1211 16:00:51.148000 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:00:51 crc kubenswrapper[5120]: E1211 16:00:51.152866 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:00:51 crc kubenswrapper[5120]: E1211 16:00:51.161016 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="400ms" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.162171 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.162952 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.162986 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.162996 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.163018 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: E1211 16:00:51.163460 5120 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.12:6443: connect: connection refused" node="crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: E1211 16:00:51.171483 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: E1211 16:00:51.197253 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: E1211 16:00:51.203281 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.207344 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.207529 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.207558 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.207576 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.207590 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.207604 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.207619 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.207633 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.207651 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.207667 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.207804 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.207844 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.207889 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.207911 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.207959 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.207998 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.208018 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.208053 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.208073 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.208096 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.208103 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.208112 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.208121 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.208174 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.208203 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.208286 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.208656 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.208754 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.208804 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.208914 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.309551 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.309596 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.309618 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.309637 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.309652 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.309668 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.309684 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.309703 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.309698 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.309757 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.309780 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.309819 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.309821 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.309824 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.309795 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.309841 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.309820 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.309850 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.309942 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.309966 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.309990 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.310001 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.310025 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.310028 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.310005 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.310074 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.310102 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.310125 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.310004 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.310142 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.310181 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.310269 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.364143 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.365335 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.365382 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.365394 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.365416 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: E1211 16:00:51.365908 5120 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.12:6443: connect: connection refused" node="crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.448996 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.453513 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.472765 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: W1211 16:00:51.481216 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-69847855bf5d517067cd029008d2926dfec8733df9ce95c2a37194d1ebd461ed WatchSource:0}: Error finding container 69847855bf5d517067cd029008d2926dfec8733df9ce95c2a37194d1ebd461ed: Status 404 returned error can't find the container with id 69847855bf5d517067cd029008d2926dfec8733df9ce95c2a37194d1ebd461ed
Dec 11 16:00:51 crc kubenswrapper[5120]: W1211 16:00:51.483389 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-140ebb8835e0ec649ecbd6f2a3a5c68c8c98eb339143bd97b1e265f41031a195 WatchSource:0}: Error finding container 140ebb8835e0ec649ecbd6f2a3a5c68c8c98eb339143bd97b1e265f41031a195: Status 404 returned error can't find the container with id 140ebb8835e0ec649ecbd6f2a3a5c68c8c98eb339143bd97b1e265f41031a195
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.486800 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.497771 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.504437 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: E1211 16:00:51.562103 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="800ms"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.766952 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.767904 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.767948 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.767961 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.767986 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: E1211 16:00:51.768461 5120 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.12:6443: connect: connection refused" node="crc"
Dec 11 16:00:51 crc kubenswrapper[5120]: E1211 16:00:51.826748 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 11 16:00:51 crc kubenswrapper[5120]: E1211 16:00:51.838441 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 11 16:00:51 crc kubenswrapper[5120]: I1211 16:00:51.945103 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.12:6443: connect: connection refused
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.025309 5120 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="0e1727a06a67899b8a0b64428ed30d3aeba0e5847e8d0d68587d3861df6686a4" exitCode=0
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.025419 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"0e1727a06a67899b8a0b64428ed30d3aeba0e5847e8d0d68587d3861df6686a4"}
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.025467 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"c7ed523fe31cb15b35c2381de69e6b9c3e58d6e9ccdf24e399d6bd17f96572ed"}
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.025582 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.026226 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.026258 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.026268 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:00:52 crc kubenswrapper[5120]: E1211 16:00:52.026439 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.026899 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"6638d34ff1843e072e0e07aee8955f1642fc6ed722b30a744affaca24191a467"}
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.026964 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"1c66bd25eb2b2aacf5bce684d9e45c8e77a860b431c6dcbef641283bcc6628a8"}
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.028424 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="39adff1a81aa61a71c26a3b775b5dd302d606e87769a7a0cb2228b80c99b5b3d" exitCode=0
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.028488 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"39adff1a81aa61a71c26a3b775b5dd302d606e87769a7a0cb2228b80c99b5b3d"}
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.028508 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"36424d20b11a937f9eed00b053b9698dddfe75a317924990ef89d6f6786d3c08"}
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.028602 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.029024 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.029052 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.029061 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:00:52 crc kubenswrapper[5120]: E1211 16:00:52.029276 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.030121 5120 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="7ea7d8d5242e079f06a3da14aa39458183802880ada73a3fe2cf37ef44cf670a" exitCode=0
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.030199 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"7ea7d8d5242e079f06a3da14aa39458183802880ada73a3fe2cf37ef44cf670a"}
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.030237 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"69847855bf5d517067cd029008d2926dfec8733df9ce95c2a37194d1ebd461ed"}
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.030405 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.031161 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.031199 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.031211 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:00:52 crc kubenswrapper[5120]: E1211 16:00:52.031432 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.037477 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.037492 5120 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="474ce51a7861c5bc5feff90183d2f0f8119cb78f6664cf308f3bddac9d48a54d" exitCode=0
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.037537 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"474ce51a7861c5bc5feff90183d2f0f8119cb78f6664cf308f3bddac9d48a54d"}
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.037564 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"140ebb8835e0ec649ecbd6f2a3a5c68c8c98eb339143bd97b1e265f41031a195"}
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.037661 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.038265 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.038288 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.038344 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.038353 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.038295 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.038612 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:00:52 crc kubenswrapper[5120]: E1211 16:00:52.038789 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:00:52 crc kubenswrapper[5120]: E1211 16:00:52.097477 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 11 16:00:52 crc kubenswrapper[5120]: E1211 16:00:52.363628 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="1.6s"
Dec 11 16:00:52 crc kubenswrapper[5120]: E1211 16:00:52.495952 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.569531 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.570289 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.570342 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.570356 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.570389 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 11 16:00:52 crc kubenswrapper[5120]: I1211 16:00:52.909433 5120 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Dec 11 16:00:53 crc kubenswrapper[5120]: I1211 16:00:53.041996 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"933b951772476b44d81cc1de5e7dd03c3072133c75320a7bf83b29860a415903"}
Dec 11 16:00:53 crc kubenswrapper[5120]: I1211 16:00:53.042041 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"766ca116d6ac3b5109c2570b2f99d2796613fc2f98455165da04bfbc978569b6"}
Dec 11 16:00:53 crc kubenswrapper[5120]: I1211 16:00:53.042051 5120 kubelet.go:2569]
"SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"fd34a347536538da804d4f3d1109839f72d4f80298ad8729c47a279d337b0347"} Dec 11 16:00:53 crc kubenswrapper[5120]: I1211 16:00:53.042196 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:00:53 crc kubenswrapper[5120]: I1211 16:00:53.042744 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:00:53 crc kubenswrapper[5120]: I1211 16:00:53.042772 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:00:53 crc kubenswrapper[5120]: I1211 16:00:53.042782 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:00:53 crc kubenswrapper[5120]: E1211 16:00:53.042956 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:00:53 crc kubenswrapper[5120]: I1211 16:00:53.044827 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"5b00a2ce5994f26ef1f9442d7f86eddb570db24aaf6bd4cf8d6d7d6017ca6cfe"} Dec 11 16:00:53 crc kubenswrapper[5120]: I1211 16:00:53.044855 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"b96a0ce7cbc1b5b30172cc8d635a0f6a38edd9ab4341a5ced498d731a842557c"} Dec 11 16:00:53 crc kubenswrapper[5120]: I1211 16:00:53.044866 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"d4c1a7b286db7da7451dd4868274f3e5a6591db27811aa67406f2d0b83001d0f"} Dec 11 16:00:53 crc kubenswrapper[5120]: I1211 16:00:53.044978 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:00:53 crc kubenswrapper[5120]: I1211 16:00:53.045469 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:00:53 crc kubenswrapper[5120]: I1211 16:00:53.045493 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:00:53 crc kubenswrapper[5120]: I1211 16:00:53.045503 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:00:53 crc kubenswrapper[5120]: E1211 16:00:53.045641 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:00:53 crc kubenswrapper[5120]: I1211 16:00:53.048672 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"a7d1c45dc8b53e74445e58caf0d0fbb1af46161d873688e7f02b38fd0428ed6a"} Dec 11 16:00:53 crc kubenswrapper[5120]: I1211 16:00:53.048722 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"82e5abe2e48b9bc24be2a124ddb74d73753e6d23955cfc138efb98d4bd6f1d79"} Dec 11 16:00:53 crc kubenswrapper[5120]: I1211 16:00:53.048736 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"137385641c3a889a062861c4d4c5e74639a19c7146eb48b5aa38b856f33f74b9"} Dec 11 16:00:53 
crc kubenswrapper[5120]: I1211 16:00:53.050491 5120 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="1a762cd77cd8e263859222842615426dc67c1d88b009dfdb8e95b1dd46052d25" exitCode=0 Dec 11 16:00:53 crc kubenswrapper[5120]: I1211 16:00:53.050553 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"1a762cd77cd8e263859222842615426dc67c1d88b009dfdb8e95b1dd46052d25"} Dec 11 16:00:53 crc kubenswrapper[5120]: I1211 16:00:53.056548 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:00:53 crc kubenswrapper[5120]: I1211 16:00:53.057184 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:00:53 crc kubenswrapper[5120]: I1211 16:00:53.057214 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:00:53 crc kubenswrapper[5120]: I1211 16:00:53.057227 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:00:53 crc kubenswrapper[5120]: E1211 16:00:53.057479 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:00:53 crc kubenswrapper[5120]: I1211 16:00:53.058521 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"7744b267a54d739ebcbcd536b7bb137bc74d46f834b8eb3fc5606c29d78b2715"} Dec 11 16:00:53 crc kubenswrapper[5120]: I1211 16:00:53.058660 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:00:53 crc kubenswrapper[5120]: I1211 16:00:53.060550 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:00:53 crc kubenswrapper[5120]: I1211 16:00:53.060595 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:00:53 crc kubenswrapper[5120]: I1211 16:00:53.060608 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:00:53 crc kubenswrapper[5120]: E1211 16:00:53.060820 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:00:54 crc kubenswrapper[5120]: I1211 16:00:54.064876 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"6e582719820a7b943cc30e7e5b8112d7aa05f8d57fc6dda984ecfe6b833ef3e3"} Dec 11 16:00:54 crc kubenswrapper[5120]: I1211 16:00:54.064941 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"667d0fa793fa7a1f4bd25f2d9712e1904e20430fe61ba718ced9f03551336e97"} Dec 11 16:00:54 crc kubenswrapper[5120]: I1211 16:00:54.065132 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:00:54 crc kubenswrapper[5120]: I1211 16:00:54.065944 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:00:54 crc kubenswrapper[5120]: I1211 16:00:54.066009 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:00:54 crc kubenswrapper[5120]: I1211 16:00:54.066033 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:00:54 crc 
kubenswrapper[5120]: E1211 16:00:54.066430 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:00:54 crc kubenswrapper[5120]: I1211 16:00:54.067202 5120 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="101ba2a0e7f8e531e25d62b429ef52fc025f1d4dd85a8a99292dc727617bdc7b" exitCode=0 Dec 11 16:00:54 crc kubenswrapper[5120]: I1211 16:00:54.067248 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"101ba2a0e7f8e531e25d62b429ef52fc025f1d4dd85a8a99292dc727617bdc7b"} Dec 11 16:00:54 crc kubenswrapper[5120]: I1211 16:00:54.067389 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:00:54 crc kubenswrapper[5120]: I1211 16:00:54.067485 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:00:54 crc kubenswrapper[5120]: I1211 16:00:54.067987 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:00:54 crc kubenswrapper[5120]: I1211 16:00:54.068044 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:00:54 crc kubenswrapper[5120]: I1211 16:00:54.068068 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:00:54 crc kubenswrapper[5120]: I1211 16:00:54.068140 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:00:54 crc kubenswrapper[5120]: I1211 16:00:54.068402 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:00:54 crc kubenswrapper[5120]: I1211 16:00:54.068455 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:00:54 crc kubenswrapper[5120]: E1211 16:00:54.068573 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:00:54 crc kubenswrapper[5120]: E1211 16:00:54.068817 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:00:54 crc kubenswrapper[5120]: I1211 16:00:54.139272 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 16:00:54 crc kubenswrapper[5120]: I1211 16:00:54.296079 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 16:00:54 crc kubenswrapper[5120]: I1211 16:00:54.296327 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:00:54 crc kubenswrapper[5120]: I1211 16:00:54.296987 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:00:54 crc kubenswrapper[5120]: I1211 16:00:54.297022 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:00:54 crc kubenswrapper[5120]: I1211 16:00:54.297035 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:00:54 crc kubenswrapper[5120]: E1211 16:00:54.297339 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:00:55 crc kubenswrapper[5120]: I1211 16:00:55.072686 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"8646f512b9353647d4931acafb053b650dae9e1b7f9414fda76dcbdfa278b53f"} Dec 11 16:00:55 crc kubenswrapper[5120]: I1211 16:00:55.072794 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:00:55 crc kubenswrapper[5120]: I1211 16:00:55.073032 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"a88d2b9d7e1fdfd36ac0c9d78c11e05bc4c426d388675ade7675da768123d7b3"} Dec 11 16:00:55 crc kubenswrapper[5120]: I1211 16:00:55.073048 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"58e6c0d1cb74646226154f4f31743874a8dea02e5529fecbdf8c6bbf0cf109d6"} Dec 11 16:00:55 crc kubenswrapper[5120]: I1211 16:00:55.073058 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"f0702dcea334f0b923b7278d7a558f48faf3eddeb3ff31dd35a839d4efe35d69"} Dec 11 16:00:55 crc kubenswrapper[5120]: I1211 16:00:55.073067 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"d641e9eddc098744fc7c612b4ede6f0e8eef1b5aba7b4a20d5b9b53791dc007c"} Dec 11 16:00:55 crc kubenswrapper[5120]: I1211 16:00:55.072846 5120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 16:00:55 crc kubenswrapper[5120]: I1211 16:00:55.073632 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:00:55 crc kubenswrapper[5120]: I1211 16:00:55.073755 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:00:55 crc 
kubenswrapper[5120]: I1211 16:00:55.075959 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:00:55 crc kubenswrapper[5120]: I1211 16:00:55.076439 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:00:55 crc kubenswrapper[5120]: I1211 16:00:55.076456 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:00:55 crc kubenswrapper[5120]: I1211 16:00:55.076491 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:00:55 crc kubenswrapper[5120]: I1211 16:00:55.076503 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:00:55 crc kubenswrapper[5120]: I1211 16:00:55.076513 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:00:55 crc kubenswrapper[5120]: I1211 16:00:55.076522 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:00:55 crc kubenswrapper[5120]: I1211 16:00:55.076532 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:00:55 crc kubenswrapper[5120]: I1211 16:00:55.076495 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:00:55 crc kubenswrapper[5120]: E1211 16:00:55.076857 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:00:55 crc kubenswrapper[5120]: E1211 16:00:55.077113 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:00:55 crc kubenswrapper[5120]: E1211 
16:00:55.077273 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:00:55 crc kubenswrapper[5120]: I1211 16:00:55.307576 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 16:00:56 crc kubenswrapper[5120]: I1211 16:00:56.074622 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:00:56 crc kubenswrapper[5120]: I1211 16:00:56.075313 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:00:56 crc kubenswrapper[5120]: I1211 16:00:56.075367 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:00:56 crc kubenswrapper[5120]: I1211 16:00:56.075384 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:00:56 crc kubenswrapper[5120]: E1211 16:00:56.075817 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:00:56 crc kubenswrapper[5120]: I1211 16:00:56.115550 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:00:56 crc kubenswrapper[5120]: I1211 16:00:56.115720 5120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 16:00:56 crc kubenswrapper[5120]: I1211 16:00:56.115757 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:00:56 crc kubenswrapper[5120]: I1211 16:00:56.116795 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:00:56 crc kubenswrapper[5120]: I1211 16:00:56.116854 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:00:56 crc kubenswrapper[5120]: I1211 16:00:56.116880 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:00:56 crc kubenswrapper[5120]: E1211 16:00:56.117474 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:00:57 crc kubenswrapper[5120]: I1211 16:00:57.014956 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc" Dec 11 16:00:57 crc kubenswrapper[5120]: I1211 16:00:57.015242 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:00:57 crc kubenswrapper[5120]: I1211 16:00:57.016047 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:00:57 crc kubenswrapper[5120]: I1211 16:00:57.016077 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:00:57 crc kubenswrapper[5120]: I1211 16:00:57.016089 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:00:57 crc kubenswrapper[5120]: E1211 16:00:57.016549 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:00:57 crc kubenswrapper[5120]: I1211 16:00:57.238909 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 16:00:57 crc kubenswrapper[5120]: I1211 16:00:57.239123 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:00:57 crc kubenswrapper[5120]: I1211 16:00:57.240020 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:00:57 crc kubenswrapper[5120]: I1211 16:00:57.240070 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:00:57 crc kubenswrapper[5120]: I1211 16:00:57.240081 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:00:57 crc kubenswrapper[5120]: E1211 16:00:57.240431 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:00:58 crc kubenswrapper[5120]: I1211 16:00:58.308361 5120 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body= Dec 11 16:00:58 crc kubenswrapper[5120]: I1211 16:00:58.308430 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded" Dec 11 16:00:58 crc kubenswrapper[5120]: I1211 16:00:58.878966 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:00:58 crc kubenswrapper[5120]: I1211 16:00:58.879180 5120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 16:00:58 crc kubenswrapper[5120]: I1211 16:00:58.879225 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:00:58 crc kubenswrapper[5120]: I1211 16:00:58.880397 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 11 16:00:58 crc kubenswrapper[5120]: I1211 16:00:58.880444 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:00:58 crc kubenswrapper[5120]: I1211 16:00:58.880454 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:00:58 crc kubenswrapper[5120]: E1211 16:00:58.880813 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:00:59 crc kubenswrapper[5120]: I1211 16:00:59.324088 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 16:00:59 crc kubenswrapper[5120]: I1211 16:00:59.324169 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:00:59 crc kubenswrapper[5120]: I1211 16:00:59.324369 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:00:59 crc kubenswrapper[5120]: I1211 16:00:59.324427 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:00:59 crc kubenswrapper[5120]: I1211 16:00:59.325910 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:00:59 crc kubenswrapper[5120]: I1211 16:00:59.325964 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:00:59 crc kubenswrapper[5120]: I1211 16:00:59.325991 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:00:59 crc kubenswrapper[5120]: I1211 16:00:59.326074 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Dec 11 16:00:59 crc kubenswrapper[5120]: I1211 16:00:59.326175 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:00:59 crc kubenswrapper[5120]: I1211 16:00:59.326214 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:00:59 crc kubenswrapper[5120]: E1211 16:00:59.326438 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:00:59 crc kubenswrapper[5120]: E1211 16:00:59.326810 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:00:59 crc kubenswrapper[5120]: I1211 16:00:59.333798 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 16:01:00 crc kubenswrapper[5120]: I1211 16:01:00.083266 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:01:00 crc kubenswrapper[5120]: I1211 16:01:00.083916 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:01:00 crc kubenswrapper[5120]: I1211 16:01:00.083955 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:01:00 crc kubenswrapper[5120]: I1211 16:01:00.083964 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:01:00 crc kubenswrapper[5120]: E1211 16:01:00.084288 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:01:01 crc kubenswrapper[5120]: E1211 16:01:01.064623 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed 
to get node info: node \"crc\" not found" Dec 11 16:01:02 crc kubenswrapper[5120]: E1211 16:01:02.571775 5120 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 11 16:01:02 crc kubenswrapper[5120]: E1211 16:01:02.911527 5120 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 11 16:01:02 crc kubenswrapper[5120]: I1211 16:01:02.946344 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 11 16:01:03 crc kubenswrapper[5120]: I1211 16:01:03.576260 5120 trace.go:236] Trace[1796680729]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 16:00:53.574) (total time: 10001ms): Dec 11 16:01:03 crc kubenswrapper[5120]: Trace[1796680729]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:01:03.576) Dec 11 16:01:03 crc kubenswrapper[5120]: Trace[1796680729]: [10.00144469s] [10.00144469s] END Dec 11 16:01:03 crc kubenswrapper[5120]: E1211 16:01:03.576307 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 11 16:01:03 crc kubenswrapper[5120]: I1211 16:01:03.677822 5120 patch_prober.go:28] 
interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 11 16:01:03 crc kubenswrapper[5120]: I1211 16:01:03.677933 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 11 16:01:03 crc kubenswrapper[5120]: I1211 16:01:03.788517 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 11 16:01:03 crc kubenswrapper[5120]: I1211 16:01:03.788769 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:01:03 crc kubenswrapper[5120]: I1211 16:01:03.789921 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:01:03 crc kubenswrapper[5120]: I1211 16:01:03.789985 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:01:03 crc kubenswrapper[5120]: I1211 16:01:03.790001 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:01:03 crc kubenswrapper[5120]: E1211 16:01:03.790658 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:01:03 crc kubenswrapper[5120]: I1211 16:01:03.884187 5120 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 
500" start-of-body=[+]ping ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]log ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]etcd ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/openshift.io-api-request-count-filter ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/openshift.io-startkubeinformers ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/generic-apiserver-start-informers ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/priority-and-fairness-config-consumer ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/priority-and-fairness-filter ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/start-apiextensions-informers ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/start-apiextensions-controllers ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/crd-informer-synced ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/start-system-namespaces-controller ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/start-cluster-authentication-info-controller ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/start-legacy-token-tracking-controller ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/start-service-ip-repair-controllers ok Dec 
11 16:01:03 crc kubenswrapper[5120]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Dec 11 16:01:03 crc kubenswrapper[5120]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/priority-and-fairness-config-producer ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/bootstrap-controller ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/start-kubernetes-service-cidr-controller ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/start-kube-aggregator-informers ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/apiservice-status-local-available-controller ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/apiservice-status-remote-available-controller ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/apiservice-registration-controller ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/apiservice-wait-for-first-sync ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/apiservice-discovery-controller ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/kube-apiserver-autoregistration ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]autoregister-completion ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/apiservice-openapi-controller ok Dec 11 16:01:03 crc kubenswrapper[5120]: [+]poststarthook/apiservice-openapiv3-controller ok Dec 11 16:01:03 crc kubenswrapper[5120]: livez check failed Dec 11 16:01:03 crc kubenswrapper[5120]: I1211 16:01:03.884296 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 16:01:03 crc kubenswrapper[5120]: E1211 16:01:03.964870 5120 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Dec 11 16:01:04 crc kubenswrapper[5120]: I1211 16:01:04.171908 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:01:04 crc kubenswrapper[5120]: I1211 16:01:04.172778 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:01:04 crc kubenswrapper[5120]: I1211 16:01:04.172817 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:01:04 crc kubenswrapper[5120]: I1211 16:01:04.172829 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:01:04 crc kubenswrapper[5120]: I1211 16:01:04.172854 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 11 16:01:06 crc kubenswrapper[5120]: I1211 16:01:06.989290 5120 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 11 16:01:07 crc kubenswrapper[5120]: I1211 16:01:07.001661 5120 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 11 16:01:07 crc kubenswrapper[5120]: I1211 16:01:07.155092 5120 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 11 16:01:07 crc kubenswrapper[5120]: I1211 16:01:07.155216 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 11 16:01:07 crc kubenswrapper[5120]: E1211 16:01:07.170837 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="6.4s" Dec 11 16:01:08 crc kubenswrapper[5120]: I1211 16:01:08.308121 5120 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 16:01:08 crc kubenswrapper[5120]: I1211 16:01:08.308230 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 16:01:08 crc kubenswrapper[5120]: I1211 16:01:08.678924 5120 trace.go:236] Trace[327803220]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 16:00:55.223) (total time: 13455ms): Dec 11 16:01:08 crc kubenswrapper[5120]: Trace[327803220]: ---"Objects listed" error:nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope 13455ms (16:01:08.678) Dec 11 16:01:08 crc kubenswrapper[5120]: Trace[327803220]: [13.45566828s] [13.45566828s] END Dec 11 16:01:08 crc kubenswrapper[5120]: 
E1211 16:01:08.679001 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 11 16:01:08 crc kubenswrapper[5120]: I1211 16:01:08.679685 5120 trace.go:236] Trace[840983521]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 16:00:54.486) (total time: 14193ms): Dec 11 16:01:08 crc kubenswrapper[5120]: Trace[840983521]: ---"Objects listed" error:runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope 14193ms (16:01:08.679) Dec 11 16:01:08 crc kubenswrapper[5120]: Trace[840983521]: [14.19311822s] [14.19311822s] END Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.679745 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 11 16:01:08 crc kubenswrapper[5120]: I1211 16:01:08.679692 5120 trace.go:236] Trace[2088515402]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 16:00:53.791) (total time: 14888ms): Dec 11 16:01:08 crc kubenswrapper[5120]: Trace[2088515402]: ---"Objects listed" error:services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope 14888ms (16:01:08.679) Dec 11 16:01:08 crc kubenswrapper[5120]: Trace[2088515402]: [14.888429886s] [14.888429886s] END Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.679775 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User 
\"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.679862 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18803490eb35f4fb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:50.955351291 +0000 UTC m=+0.209654632,LastTimestamp:2025-12-11 16:00:50.955351291 +0000 UTC m=+0.209654632,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.681500 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18803490ee118290 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.003294352 +0000 UTC m=+0.257597713,LastTimestamp:2025-12-11 16:00:51.003294352 +0000 UTC m=+0.257597713,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 
16:01:08.683306 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18803490ee12172b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.003332395 +0000 UTC m=+0.257635746,LastTimestamp:2025-12-11 16:00:51.003332395 +0000 UTC m=+0.257635746,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.690059 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18803490ee125aac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.003349676 +0000 UTC m=+0.257653017,LastTimestamp:2025-12-11 16:00:51.003349676 +0000 UTC m=+0.257653017,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.697965 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{crc.18803490f1b0062e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.064014382 +0000 UTC m=+0.318317713,LastTimestamp:2025-12-11 16:00:51.064014382 +0000 UTC m=+0.318317713,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.709734 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18803490ee118290\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18803490ee118290 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.003294352 +0000 UTC m=+0.257597713,LastTimestamp:2025-12-11 16:00:51.122605721 +0000 UTC m=+0.376909052,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.715836 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18803490ee12172b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18803490ee12172b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.003332395 +0000 UTC m=+0.257635746,LastTimestamp:2025-12-11 16:00:51.122625493 +0000 UTC m=+0.376928824,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.721535 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18803490ee125aac\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18803490ee125aac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.003349676 +0000 UTC m=+0.257653017,LastTimestamp:2025-12-11 16:00:51.122633363 +0000 UTC m=+0.376936694,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.727677 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18803490ee118290\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18803490ee118290 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status 
is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.003294352 +0000 UTC m=+0.257597713,LastTimestamp:2025-12-11 16:00:51.124733121 +0000 UTC m=+0.379036452,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.736587 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18803490ee12172b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18803490ee12172b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.003332395 +0000 UTC m=+0.257635746,LastTimestamp:2025-12-11 16:00:51.124759173 +0000 UTC m=+0.379062504,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.745207 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18803490ee125aac\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18803490ee125aac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.003349676 +0000 UTC 
m=+0.257653017,LastTimestamp:2025-12-11 16:00:51.124773304 +0000 UTC m=+0.379076635,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.753307 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18803490ee118290\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18803490ee118290 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.003294352 +0000 UTC m=+0.257597713,LastTimestamp:2025-12-11 16:00:51.12572402 +0000 UTC m=+0.380027361,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.760647 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18803490ee12172b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18803490ee12172b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.003332395 +0000 UTC m=+0.257635746,LastTimestamp:2025-12-11 16:00:51.125745582 +0000 UTC m=+0.380048923,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.765828 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18803490ee125aac\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18803490ee125aac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.003349676 +0000 UTC m=+0.257653017,LastTimestamp:2025-12-11 16:00:51.125759873 +0000 UTC m=+0.380063214,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.773916 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18803490ee118290\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18803490ee118290 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.003294352 +0000 UTC m=+0.257597713,LastTimestamp:2025-12-11 16:00:51.126341214 +0000 UTC m=+0.380644545,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.781277 5120 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18803490ee12172b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18803490ee12172b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.003332395 +0000 UTC m=+0.257635746,LastTimestamp:2025-12-11 16:00:51.126368946 +0000 UTC m=+0.380672277,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.787672 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18803490ee118290\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18803490ee118290 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.003294352 +0000 UTC m=+0.257597713,LastTimestamp:2025-12-11 16:00:51.126385137 +0000 UTC m=+0.380688468,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.794452 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18803490ee125aac\" is forbidden: User \"system:anonymous\" cannot patch resource 
\"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18803490ee125aac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.003349676 +0000 UTC m=+0.257653017,LastTimestamp:2025-12-11 16:00:51.126391707 +0000 UTC m=+0.380695038,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.800314 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18803490ee12172b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18803490ee12172b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.003332395 +0000 UTC m=+0.257635746,LastTimestamp:2025-12-11 16:00:51.12642436 +0000 UTC m=+0.380727691,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.808116 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18803490ee125aac\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18803490ee125aac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.003349676 +0000 UTC m=+0.257653017,LastTimestamp:2025-12-11 16:00:51.126455422 +0000 UTC m=+0.380758753,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.814871 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18803490ee118290\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18803490ee118290 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.003294352 +0000 UTC m=+0.257597713,LastTimestamp:2025-12-11 16:00:51.128339464 +0000 UTC m=+0.382642805,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.823427 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18803490ee12172b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18803490ee12172b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc 
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.003332395 +0000 UTC m=+0.257635746,LastTimestamp:2025-12-11 16:00:51.128355375 +0000 UTC m=+0.382658706,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.829481 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18803490ee125aac\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18803490ee125aac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.003349676 +0000 UTC m=+0.257653017,LastTimestamp:2025-12-11 16:00:51.128367736 +0000 UTC m=+0.382671067,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.837343 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18803490ee118290\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18803490ee118290 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.003294352 +0000 UTC 
m=+0.257597713,LastTimestamp:2025-12-11 16:00:51.128399429 +0000 UTC m=+0.382702760,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.844765 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18803490ee12172b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18803490ee12172b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.003332395 +0000 UTC m=+0.257635746,LastTimestamp:2025-12-11 16:00:51.12841623 +0000 UTC m=+0.382719561,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.851888 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188034910ae9ee7c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.487239804 
+0000 UTC m=+0.741543135,LastTimestamp:2025-12-11 16:00:51.487239804 +0000 UTC m=+0.741543135,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.858649 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188034910aeb47f1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.487328241 +0000 UTC m=+0.741631572,LastTimestamp:2025-12-11 16:00:51.487328241 +0000 UTC m=+0.741631572,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.866314 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188034910b88ddb3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.497655731 +0000 UTC m=+0.751959062,LastTimestamp:2025-12-11 16:00:51.497655731 +0000 UTC m=+0.751959062,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.873329 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188034910c837593 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.514078611 +0000 UTC m=+0.768381942,LastTimestamp:2025-12-11 16:00:51.514078611 +0000 UTC m=+0.768381942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.878776 5120 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188034910da2142f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.532862511 +0000 UTC m=+0.787165842,LastTimestamp:2025-12-11 16:00:51.532862511 +0000 UTC m=+0.787165842,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: I1211 16:01:08.885867 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:01:08 crc kubenswrapper[5120]: I1211 16:01:08.886248 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.887466 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1880349126aa5330 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.952833328 +0000 UTC m=+1.207136659,LastTimestamp:2025-12-11 16:00:51.952833328 +0000 UTC m=+1.207136659,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: I1211 16:01:08.887828 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:01:08 crc kubenswrapper[5120]: I1211 16:01:08.887871 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:01:08 crc kubenswrapper[5120]: I1211 16:01:08.887882 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.888304 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:01:08 crc kubenswrapper[5120]: I1211 16:01:08.891493 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.894591 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1880349126d61974 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.955702132 +0000 UTC m=+1.210005463,LastTimestamp:2025-12-11 16:00:51.955702132 +0000 UTC m=+1.210005463,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.901708 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1880349126d634c3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.955709123 +0000 UTC m=+1.210012454,LastTimestamp:2025-12-11 16:00:51.955709123 +0000 UTC m=+1.210012454,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.903298 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.1880349126e4eafb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.956673275 +0000 UTC m=+1.210976606,LastTimestamp:2025-12-11 16:00:51.956673275 +0000 UTC m=+1.210976606,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.908301 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1880349126e51d7e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.956686206 +0000 UTC m=+1.210989537,LastTimestamp:2025-12-11 16:00:51.956686206 +0000 UTC m=+1.210989537,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.912761 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18803491272d4557 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.961414999 +0000 UTC m=+1.215718330,LastTimestamp:2025-12-11 16:00:51.961414999 +0000 UTC m=+1.215718330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.918728 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1880349127592348 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.964289864 +0000 UTC m=+1.218593185,LastTimestamp:2025-12-11 16:00:51.964289864 +0000 UTC m=+1.218593185,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.927229 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1880349127723a36 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.965934134 +0000 UTC m=+1.220237465,LastTimestamp:2025-12-11 16:00:51.965934134 +0000 UTC m=+1.220237465,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.933211 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1880349127d8a9bc openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.972647356 +0000 UTC m=+1.226950687,LastTimestamp:2025-12-11 16:00:51.972647356 +0000 UTC m=+1.226950687,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.940446 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1880349127ef0d29 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.974114601 +0000 UTC m=+1.228417932,LastTimestamp:2025-12-11 16:00:51.974114601 +0000 UTC m=+1.228417932,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.946759 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188034912878be2d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:51.983138349 +0000 UTC m=+1.237441680,LastTimestamp:2025-12-11 16:00:51.983138349 +0000 UTC m=+1.237441680,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: I1211 16:01:08.952066 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.952046 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188034912b1cbe72 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.027440754 +0000 UTC m=+1.281744085,LastTimestamp:2025-12-11 16:00:52.027440754 +0000 UTC m=+1.281744085,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.958442 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188034912bb21926 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.037228838 +0000 UTC m=+1.291532169,LastTimestamp:2025-12-11 16:00:52.037228838 +0000 UTC m=+1.291532169,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.963628 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188034912bb31520 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.037293344 +0000 UTC m=+1.291596675,LastTimestamp:2025-12-11 16:00:52.037293344 +0000 UTC m=+1.291596675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.969375 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188034912c16ff34 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.043841332 +0000 UTC m=+1.298144663,LastTimestamp:2025-12-11 16:00:52.043841332 +0000 UTC m=+1.298144663,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.974210 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1880349136a9fc41 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.221246529 +0000 UTC m=+1.475549860,LastTimestamp:2025-12-11 16:00:52.221246529 +0000 UTC 
m=+1.475549860,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.990540 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18803491375fc878 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.233160824 +0000 UTC m=+1.487464155,LastTimestamp:2025-12-11 16:00:52.233160824 +0000 UTC m=+1.487464155,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:08 crc kubenswrapper[5120]: E1211 16:01:08.996498 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1880349137a01017 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container 
cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.237373463 +0000 UTC m=+1.491676794,LastTimestamp:2025-12-11 16:00:52.237373463 +0000 UTC m=+1.491676794,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.003781 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1880349137aed424 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.238341156 +0000 UTC m=+1.492644477,LastTimestamp:2025-12-11 16:00:52.238341156 +0000 UTC m=+1.492644477,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.010865 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1880349138e266f5 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.258498293 +0000 UTC m=+1.512801624,LastTimestamp:2025-12-11 16:00:52.258498293 +0000 UTC m=+1.512801624,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.015973 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1880349138f776ab openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.259878571 +0000 UTC m=+1.514181902,LastTimestamp:2025-12-11 16:00:52.259878571 +0000 UTC m=+1.514181902,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.020874 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188034913a155301 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.278612737 +0000 UTC m=+1.532916068,LastTimestamp:2025-12-11 16:00:52.278612737 +0000 UTC m=+1.532916068,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.025591 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188034913a204c6d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.279331949 +0000 UTC m=+1.533635280,LastTimestamp:2025-12-11 16:00:52.279331949 +0000 UTC m=+1.533635280,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.030444 5120 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188034913ab0fe05 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.288814597 +0000 UTC m=+1.543117928,LastTimestamp:2025-12-11 16:00:52.288814597 +0000 UTC m=+1.543117928,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.035068 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188034913b35b592 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.297512338 +0000 UTC m=+1.551815669,LastTimestamp:2025-12-11 16:00:52.297512338 +0000 UTC m=+1.551815669,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 
16:01:09.039629 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188034913b8043f6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.302398454 +0000 UTC m=+1.556701785,LastTimestamp:2025-12-11 16:00:52.302398454 +0000 UTC m=+1.556701785,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.043838 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188034913b93364c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.30364014 +0000 UTC m=+1.557943471,LastTimestamp:2025-12-11 16:00:52.30364014 +0000 UTC 
m=+1.557943471,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.049519 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188034913c55cb5f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.316392287 +0000 UTC m=+1.570695618,LastTimestamp:2025-12-11 16:00:52.316392287 +0000 UTC m=+1.570695618,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.054674 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188034914500ed13 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.461825299 +0000 UTC 
m=+1.716128630,LastTimestamp:2025-12-11 16:00:52.461825299 +0000 UTC m=+1.716128630,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.059552 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18803491469ab4aa openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.488680618 +0000 UTC m=+1.742983949,LastTimestamp:2025-12-11 16:00:52.488680618 +0000 UTC m=+1.742983949,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.064291 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1880349146ace03b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.489871419 +0000 UTC m=+1.744174750,LastTimestamp:2025-12-11 16:00:52.489871419 +0000 UTC m=+1.744174750,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.069247 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1880349152502f09 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.685123337 +0000 UTC m=+1.939426668,LastTimestamp:2025-12-11 16:00:52.685123337 +0000 UTC m=+1.939426668,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.073647 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18803491553f96f1 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.734367473 +0000 UTC m=+1.988670804,LastTimestamp:2025-12-11 16:00:52.734367473 +0000 UTC m=+1.988670804,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.078575 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880349155847c8a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.738882698 +0000 UTC m=+1.993186029,LastTimestamp:2025-12-11 16:00:52.738882698 +0000 UTC m=+1.993186029,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.084612 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.1880349155d029f3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.743842291 +0000 UTC m=+1.998145622,LastTimestamp:2025-12-11 16:00:52.743842291 +0000 UTC m=+1.998145622,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.089390 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880349156055381 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.747326337 +0000 UTC m=+2.001629668,LastTimestamp:2025-12-11 16:00:52.747326337 +0000 UTC m=+2.001629668,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.093748 5120 event.go:359] "Server rejected event (will not retry!)" err="events is 
forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188034915616e121 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.748476705 +0000 UTC m=+2.002780036,LastTimestamp:2025-12-11 16:00:52.748476705 +0000 UTC m=+2.002780036,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.100299 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18803491569e5428 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.757353512 +0000 UTC m=+2.011656843,LastTimestamp:2025-12-11 16:00:52.757353512 +0000 UTC 
m=+2.011656843,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.101472 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1880349156b892c6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.759073478 +0000 UTC m=+2.013376809,LastTimestamp:2025-12-11 16:00:52.759073478 +0000 UTC m=+2.013376809,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: I1211 16:01:09.102150 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:01:09 crc kubenswrapper[5120]: I1211 16:01:09.103038 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:01:09 crc kubenswrapper[5120]: I1211 16:01:09.103093 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:01:09 crc kubenswrapper[5120]: I1211 16:01:09.103110 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.103569 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.107680 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1880349162dcd39e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.962775966 +0000 UTC m=+2.217079287,LastTimestamp:2025-12-11 16:00:52.962775966 +0000 UTC m=+2.217079287,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.112517 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880349162e9f276 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.96363583 +0000 UTC m=+2.217939161,LastTimestamp:2025-12-11 16:00:52.96363583 +0000 UTC m=+2.217939161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.118265 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188034916372aabf openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.972595903 +0000 UTC m=+2.226899234,LastTimestamp:2025-12-11 16:00:52.972595903 +0000 UTC m=+2.226899234,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.122981 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group 
\"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18803491639a53a5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.975195045 +0000 UTC m=+2.229498376,LastTimestamp:2025-12-11 16:00:52.975195045 +0000 UTC m=+2.229498376,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.128917 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1880349163a78956 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:52.976060758 +0000 UTC m=+2.230364089,LastTimestamp:2025-12-11 16:00:52.976060758 +0000 UTC m=+2.230364089,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.135547 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1880349168930624 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:53.058602532 +0000 UTC m=+2.312905863,LastTimestamp:2025-12-11 16:00:53.058602532 +0000 UTC m=+2.312905863,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.140571 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188034916e5eeef3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 
16:00:53.155852019 +0000 UTC m=+2.410155350,LastTimestamp:2025-12-11 16:00:53.155852019 +0000 UTC m=+2.410155350,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.146350 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188034916f225218 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:53.16865692 +0000 UTC m=+2.422960251,LastTimestamp:2025-12-11 16:00:53.16865692 +0000 UTC m=+2.422960251,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.156860 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188034916f2f87f2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:53.169522674 +0000 UTC m=+2.423826015,LastTimestamp:2025-12-11 16:00:53.169522674 +0000 UTC m=+2.423826015,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.162556 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1880349173fd443d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:53.250114621 +0000 UTC m=+2.504417952,LastTimestamp:2025-12-11 16:00:53.250114621 +0000 UTC m=+2.504417952,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.170941 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1880349174b07086 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:53.261856902 +0000 UTC m=+2.516160233,LastTimestamp:2025-12-11 16:00:53.261856902 +0000 UTC m=+2.516160233,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.176420 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188034917a7183d0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:53.358396368 +0000 UTC m=+2.612699689,LastTimestamp:2025-12-11 16:00:53.358396368 +0000 UTC m=+2.612699689,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.183261 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188034917ae92438 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:53.366236216 +0000 UTC m=+2.620539557,LastTimestamp:2025-12-11 16:00:53.366236216 +0000 UTC m=+2.620539557,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.190324 5120 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.190571 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18803491a4daa2ae openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:54.069928622 +0000 UTC m=+3.324231983,LastTimestamp:2025-12-11 16:00:54.069928622 +0000 UTC 
m=+3.324231983,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.196996 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18803491b0616a0c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:54.26331086 +0000 UTC m=+3.517614191,LastTimestamp:2025-12-11 16:00:54.26331086 +0000 UTC m=+3.517614191,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.202568 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18803491b0cb006d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:54.270230637 +0000 UTC m=+3.524533968,LastTimestamp:2025-12-11 16:00:54.270230637 +0000 UTC m=+3.524533968,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.209240 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18803491b0d6aeb4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:54.270996148 +0000 UTC m=+3.525299479,LastTimestamp:2025-12-11 16:00:54.270996148 +0000 UTC m=+3.525299479,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.214523 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18803491b9e837b1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:54.423140273 +0000 UTC m=+3.677443624,LastTimestamp:2025-12-11 16:00:54.423140273 +0000 UTC 
m=+3.677443624,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.221091 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18803491ba72f90b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:54.432233739 +0000 UTC m=+3.686537070,LastTimestamp:2025-12-11 16:00:54.432233739 +0000 UTC m=+3.686537070,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.227193 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18803491ba7ebb98 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:54.43300444 +0000 UTC 
m=+3.687307771,LastTimestamp:2025-12-11 16:00:54.43300444 +0000 UTC m=+3.687307771,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.232119 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18803491c3f3be09 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:54.591667721 +0000 UTC m=+3.845971052,LastTimestamp:2025-12-11 16:00:54.591667721 +0000 UTC m=+3.845971052,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.236750 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18803491c482e96b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:54.601050475 +0000 UTC m=+3.855353806,LastTimestamp:2025-12-11 16:00:54.601050475 +0000 UTC 
m=+3.855353806,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.242554 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18803491c4933e3c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:54.602120764 +0000 UTC m=+3.856424095,LastTimestamp:2025-12-11 16:00:54.602120764 +0000 UTC m=+3.856424095,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.247189 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18803491ccfe06d8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:54.743336664 +0000 UTC 
m=+3.997639995,LastTimestamp:2025-12-11 16:00:54.743336664 +0000 UTC m=+3.997639995,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.252290 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18803491cd99aa76 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:54.75353663 +0000 UTC m=+4.007839961,LastTimestamp:2025-12-11 16:00:54.75353663 +0000 UTC m=+4.007839961,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.258000 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18803491cdaa0997 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:54.754609559 +0000 UTC m=+4.008912890,LastTimestamp:2025-12-11 16:00:54.754609559 +0000 UTC m=+4.008912890,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.262627 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18803491d6bd7d7e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:54.906879358 +0000 UTC m=+4.161182699,LastTimestamp:2025-12-11 16:00:54.906879358 +0000 UTC m=+4.161182699,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.267854 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18803491d74002c7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 
16:00:54.915433159 +0000 UTC m=+4.169736490,LastTimestamp:2025-12-11 16:00:54.915433159 +0000 UTC m=+4.169736490,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.277682 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Dec 11 16:01:09 crc kubenswrapper[5120]: &Event{ObjectMeta:{kube-controller-manager-crc.18803492a17cb713 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded Dec 11 16:01:09 crc kubenswrapper[5120]: body: Dec 11 16:01:09 crc kubenswrapper[5120]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:58.308409107 +0000 UTC m=+7.562712448,LastTimestamp:2025-12-11 16:00:58.308409107 +0000 UTC m=+7.562712448,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 11 16:01:09 crc kubenswrapper[5120]: > Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.283758 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18803492a17dcc4e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:58.308480078 +0000 UTC m=+7.562783419,LastTimestamp:2025-12-11 16:00:58.308480078 +0000 UTC m=+7.562783419,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.293915 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 11 16:01:09 crc kubenswrapper[5120]: &Event{ObjectMeta:{kube-apiserver-crc.18803493e188792a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Dec 11 16:01:09 crc kubenswrapper[5120]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 11 16:01:09 crc kubenswrapper[5120]: Dec 11 16:01:09 crc kubenswrapper[5120]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:01:03.67788881 +0000 UTC m=+12.932192141,LastTimestamp:2025-12-11 16:01:03.67788881 +0000 UTC m=+12.932192141,Count:1,Type:Warning,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 11 16:01:09 crc kubenswrapper[5120]: > Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.298647 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18803493e189bd78 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:01:03.677971832 +0000 UTC m=+12.932275163,LastTimestamp:2025-12-11 16:01:03.677971832 +0000 UTC m=+12.932275163,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.304197 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 11 16:01:09 crc kubenswrapper[5120]: &Event{ObjectMeta:{kube-apiserver-crc.18803493edd5685b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500 Dec 11 16:01:09 crc kubenswrapper[5120]: body: 
[+]ping ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]log ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]etcd ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/openshift.io-api-request-count-filter ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/openshift.io-startkubeinformers ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/generic-apiserver-start-informers ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/priority-and-fairness-config-consumer ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/priority-and-fairness-filter ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/start-apiextensions-informers ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/start-apiextensions-controllers ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/crd-informer-synced ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/start-system-namespaces-controller ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/start-cluster-authentication-info-controller ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/start-legacy-token-tracking-controller ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/start-service-ip-repair-controllers ok Dec 11 16:01:09 crc 
kubenswrapper[5120]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Dec 11 16:01:09 crc kubenswrapper[5120]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/priority-and-fairness-config-producer ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/bootstrap-controller ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/start-kubernetes-service-cidr-controller ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/start-kube-aggregator-informers ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/apiservice-status-local-available-controller ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/apiservice-status-remote-available-controller ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/apiservice-registration-controller ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/apiservice-wait-for-first-sync ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/apiservice-discovery-controller ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/kube-apiserver-autoregistration ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]autoregister-completion ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/apiservice-openapi-controller ok Dec 11 16:01:09 crc kubenswrapper[5120]: [+]poststarthook/apiservice-openapiv3-controller ok Dec 11 16:01:09 crc kubenswrapper[5120]: livez check failed Dec 11 16:01:09 crc kubenswrapper[5120]: Dec 11 16:01:09 crc kubenswrapper[5120]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:01:03.884257371 +0000 UTC m=+13.138560702,LastTimestamp:2025-12-11 16:01:03.884257371 +0000 UTC m=+13.138560702,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 11 16:01:09 crc kubenswrapper[5120]: > Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.312721 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18803493edd66839 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:01:03.884322873 +0000 UTC m=+13.138626204,LastTimestamp:2025-12-11 16:01:03.884322873 +0000 UTC m=+13.138626204,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.317751 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 11 16:01:09 crc kubenswrapper[5120]: &Event{ObjectMeta:{kube-apiserver-crc.18803494b0cbd3a4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: 
connection refused Dec 11 16:01:09 crc kubenswrapper[5120]: body: Dec 11 16:01:09 crc kubenswrapper[5120]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:01:07.155186596 +0000 UTC m=+16.409489927,LastTimestamp:2025-12-11 16:01:07.155186596 +0000 UTC m=+16.409489927,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 11 16:01:09 crc kubenswrapper[5120]: > Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.325549 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18803494b0ccfc7c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:01:07.155262588 +0000 UTC m=+16.409565919,LastTimestamp:2025-12-11 16:01:07.155262588 +0000 UTC m=+16.409565919,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.331177 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Dec 11 16:01:09 crc kubenswrapper[5120]: 
&Event{ObjectMeta:{kube-controller-manager-crc.18803494f5856c5c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Dec 11 16:01:09 crc kubenswrapper[5120]: body: Dec 11 16:01:09 crc kubenswrapper[5120]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:01:08.30820054 +0000 UTC m=+17.562503871,LastTimestamp:2025-12-11 16:01:08.30820054 +0000 UTC m=+17.562503871,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 11 16:01:09 crc kubenswrapper[5120]: > Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.335389 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18803494f5865fc3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 
16:01:08.308262851 +0000 UTC m=+17.562566182,LastTimestamp:2025-12-11 16:01:08.308262851 +0000 UTC m=+17.562566182,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: I1211 16:01:09.765791 5120 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35684->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 11 16:01:09 crc kubenswrapper[5120]: I1211 16:01:09.765873 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35684->192.168.126.11:17697: read: connection reset by peer" Dec 11 16:01:09 crc kubenswrapper[5120]: I1211 16:01:09.766826 5120 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 11 16:01:09 crc kubenswrapper[5120]: I1211 16:01:09.767807 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.772338 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 11 16:01:09 crc kubenswrapper[5120]: &Event{ObjectMeta:{kube-apiserver-crc.188034954c673fd5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:35684->192.168.126.11:17697: read: connection reset by peer Dec 11 16:01:09 crc kubenswrapper[5120]: body: Dec 11 16:01:09 crc kubenswrapper[5120]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:01:09.765840853 +0000 UTC m=+19.020144184,LastTimestamp:2025-12-11 16:01:09.765840853 +0000 UTC m=+19.020144184,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 11 16:01:09 crc kubenswrapper[5120]: > Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.776225 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188034954c681ba4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35684->192.168.126.11:17697: read: connection reset by 
peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:01:09.765897124 +0000 UTC m=+19.020200455,LastTimestamp:2025-12-11 16:01:09.765897124 +0000 UTC m=+19.020200455,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.779942 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 11 16:01:09 crc kubenswrapper[5120]: &Event{ObjectMeta:{kube-apiserver-crc.188034954c84ad45 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Dec 11 16:01:09 crc kubenswrapper[5120]: body: Dec 11 16:01:09 crc kubenswrapper[5120]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:01:09.767769413 +0000 UTC m=+19.022072764,LastTimestamp:2025-12-11 16:01:09.767769413 +0000 UTC m=+19.022072764,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 11 16:01:09 crc kubenswrapper[5120]: > Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.783796 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188034954c869965 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:01:09.767895397 +0000 UTC m=+19.022198738,LastTimestamp:2025-12-11 16:01:09.767895397 +0000 UTC m=+19.022198738,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:09 crc kubenswrapper[5120]: I1211 16:01:09.949813 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 11 16:01:09 crc kubenswrapper[5120]: E1211 16:01:09.969598 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 11 16:01:10 crc kubenswrapper[5120]: I1211 16:01:10.090512 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 16:01:10 crc kubenswrapper[5120]: I1211 16:01:10.090732 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:01:10 crc kubenswrapper[5120]: I1211 16:01:10.091905 5120 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Dec 11 16:01:10 crc kubenswrapper[5120]: I1211 16:01:10.091938 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:01:10 crc kubenswrapper[5120]: I1211 16:01:10.091948 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:01:10 crc kubenswrapper[5120]: E1211 16:01:10.092223 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:01:10 crc kubenswrapper[5120]: I1211 16:01:10.105264 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 11 16:01:10 crc kubenswrapper[5120]: I1211 16:01:10.106745 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="6e582719820a7b943cc30e7e5b8112d7aa05f8d57fc6dda984ecfe6b833ef3e3" exitCode=255 Dec 11 16:01:10 crc kubenswrapper[5120]: I1211 16:01:10.106812 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"6e582719820a7b943cc30e7e5b8112d7aa05f8d57fc6dda984ecfe6b833ef3e3"} Dec 11 16:01:10 crc kubenswrapper[5120]: I1211 16:01:10.107009 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:01:10 crc kubenswrapper[5120]: I1211 16:01:10.107568 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:01:10 crc kubenswrapper[5120]: I1211 16:01:10.107663 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:01:10 crc kubenswrapper[5120]: I1211 16:01:10.107749 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:01:10 crc kubenswrapper[5120]: E1211 16:01:10.108167 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:01:10 crc kubenswrapper[5120]: I1211 16:01:10.108447 5120 scope.go:117] "RemoveContainer" containerID="6e582719820a7b943cc30e7e5b8112d7aa05f8d57fc6dda984ecfe6b833ef3e3" Dec 11 16:01:10 crc kubenswrapper[5120]: E1211 16:01:10.117144 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188034916f2f87f2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188034916f2f87f2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:53.169522674 +0000 UTC m=+2.423826015,LastTimestamp:2025-12-11 16:01:10.109406199 +0000 UTC m=+19.363709530,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:10 crc kubenswrapper[5120]: E1211 16:01:10.506495 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188034917a7183d0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188034917a7183d0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:53.358396368 +0000 UTC m=+2.612699689,LastTimestamp:2025-12-11 16:01:10.501119122 +0000 UTC m=+19.755422453,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:10 crc kubenswrapper[5120]: E1211 16:01:10.517317 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188034917ae92438\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188034917ae92438 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:53.366236216 +0000 UTC m=+2.620539557,LastTimestamp:2025-12-11 16:01:10.51289968 +0000 UTC m=+19.767203011,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:10 crc kubenswrapper[5120]: I1211 16:01:10.950441 5120 csi_plugin.go:988] Failed to contact API 
server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 11 16:01:11 crc kubenswrapper[5120]: E1211 16:01:11.064910 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 11 16:01:11 crc kubenswrapper[5120]: I1211 16:01:11.111981 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 11 16:01:11 crc kubenswrapper[5120]: I1211 16:01:11.113906 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"75f8edef607aa7a0e66249dd37dd7b9be60fded7323f4314ebb86bd6d7c72b1c"} Dec 11 16:01:11 crc kubenswrapper[5120]: I1211 16:01:11.114091 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:01:11 crc kubenswrapper[5120]: I1211 16:01:11.114648 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:01:11 crc kubenswrapper[5120]: I1211 16:01:11.114676 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:01:11 crc kubenswrapper[5120]: I1211 16:01:11.114685 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:01:11 crc kubenswrapper[5120]: E1211 16:01:11.114966 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:01:11 crc kubenswrapper[5120]: I1211 16:01:11.948717 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 11 16:01:12 crc kubenswrapper[5120]: I1211 16:01:12.118231 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 11 16:01:12 crc kubenswrapper[5120]: I1211 16:01:12.120712 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 11 16:01:12 crc kubenswrapper[5120]: I1211 16:01:12.122767 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="75f8edef607aa7a0e66249dd37dd7b9be60fded7323f4314ebb86bd6d7c72b1c" exitCode=255 Dec 11 16:01:12 crc kubenswrapper[5120]: I1211 16:01:12.122809 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"75f8edef607aa7a0e66249dd37dd7b9be60fded7323f4314ebb86bd6d7c72b1c"} Dec 11 16:01:12 crc kubenswrapper[5120]: I1211 16:01:12.122895 5120 scope.go:117] "RemoveContainer" containerID="6e582719820a7b943cc30e7e5b8112d7aa05f8d57fc6dda984ecfe6b833ef3e3" Dec 11 16:01:12 crc kubenswrapper[5120]: I1211 16:01:12.123109 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:01:12 crc kubenswrapper[5120]: I1211 16:01:12.123662 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:01:12 crc kubenswrapper[5120]: I1211 16:01:12.123700 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:01:12 crc kubenswrapper[5120]: I1211 16:01:12.123710 5120 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:01:12 crc kubenswrapper[5120]: E1211 16:01:12.124090 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:01:12 crc kubenswrapper[5120]: I1211 16:01:12.124384 5120 scope.go:117] "RemoveContainer" containerID="75f8edef607aa7a0e66249dd37dd7b9be60fded7323f4314ebb86bd6d7c72b1c" Dec 11 16:01:12 crc kubenswrapper[5120]: E1211 16:01:12.125225 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 11 16:01:12 crc kubenswrapper[5120]: E1211 16:01:12.129849 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18803495d907f670 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:01:12.1251836 +0000 UTC m=+21.379486931,LastTimestamp:2025-12-11 16:01:12.1251836 +0000 UTC m=+21.379486931,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:12 crc kubenswrapper[5120]: I1211 16:01:12.391366 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:01:12 crc kubenswrapper[5120]: I1211 16:01:12.392116 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:01:12 crc kubenswrapper[5120]: I1211 16:01:12.392173 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:01:12 crc kubenswrapper[5120]: I1211 16:01:12.392187 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:01:12 crc kubenswrapper[5120]: I1211 16:01:12.392211 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 11 16:01:12 crc kubenswrapper[5120]: E1211 16:01:12.400631 5120 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 11 16:01:12 crc kubenswrapper[5120]: E1211 16:01:12.848755 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 11 16:01:12 crc kubenswrapper[5120]: I1211 16:01:12.949014 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 11 16:01:13 crc kubenswrapper[5120]: I1211 16:01:13.126238 5120 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 11 16:01:13 crc kubenswrapper[5120]: E1211 16:01:13.160070 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 11 16:01:13 crc kubenswrapper[5120]: E1211 16:01:13.576463 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 11 16:01:13 crc kubenswrapper[5120]: I1211 16:01:13.807825 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 11 16:01:13 crc kubenswrapper[5120]: I1211 16:01:13.808102 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:01:13 crc kubenswrapper[5120]: I1211 16:01:13.809065 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:01:13 crc kubenswrapper[5120]: I1211 16:01:13.809096 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:01:13 crc kubenswrapper[5120]: I1211 16:01:13.809105 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:01:13 crc kubenswrapper[5120]: E1211 16:01:13.809446 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:01:13 crc kubenswrapper[5120]: I1211 16:01:13.817467 5120 kubelet.go:2658] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 11 16:01:13 crc kubenswrapper[5120]: I1211 16:01:13.949457 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 11 16:01:14 crc kubenswrapper[5120]: I1211 16:01:14.130623 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:01:14 crc kubenswrapper[5120]: I1211 16:01:14.131226 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:01:14 crc kubenswrapper[5120]: I1211 16:01:14.131291 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:01:14 crc kubenswrapper[5120]: I1211 16:01:14.131303 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:01:14 crc kubenswrapper[5120]: E1211 16:01:14.131662 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:01:14 crc kubenswrapper[5120]: E1211 16:01:14.433337 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 11 16:01:14 crc kubenswrapper[5120]: I1211 16:01:14.951107 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 11 16:01:15 crc kubenswrapper[5120]: I1211 16:01:15.312367 5120 
kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 16:01:15 crc kubenswrapper[5120]: I1211 16:01:15.312609 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:01:15 crc kubenswrapper[5120]: I1211 16:01:15.313664 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:01:15 crc kubenswrapper[5120]: I1211 16:01:15.313708 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:01:15 crc kubenswrapper[5120]: I1211 16:01:15.313719 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:01:15 crc kubenswrapper[5120]: E1211 16:01:15.314022 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:01:15 crc kubenswrapper[5120]: I1211 16:01:15.316639 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 16:01:15 crc kubenswrapper[5120]: I1211 16:01:15.948578 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 11 16:01:16 crc kubenswrapper[5120]: I1211 16:01:16.135352 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:01:16 crc kubenswrapper[5120]: I1211 16:01:16.135842 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:01:16 crc kubenswrapper[5120]: I1211 16:01:16.135878 5120 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:01:16 crc kubenswrapper[5120]: I1211 16:01:16.135891 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:01:16 crc kubenswrapper[5120]: E1211 16:01:16.136220 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:01:16 crc kubenswrapper[5120]: E1211 16:01:16.810504 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 11 16:01:16 crc kubenswrapper[5120]: I1211 16:01:16.950204 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 11 16:01:17 crc kubenswrapper[5120]: I1211 16:01:17.154891 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:01:17 crc kubenswrapper[5120]: I1211 16:01:17.155109 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:01:17 crc kubenswrapper[5120]: I1211 16:01:17.155877 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:01:17 crc kubenswrapper[5120]: I1211 16:01:17.155923 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:01:17 crc kubenswrapper[5120]: I1211 16:01:17.155968 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 
16:01:17 crc kubenswrapper[5120]: E1211 16:01:17.156313 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:01:17 crc kubenswrapper[5120]: I1211 16:01:17.156559 5120 scope.go:117] "RemoveContainer" containerID="75f8edef607aa7a0e66249dd37dd7b9be60fded7323f4314ebb86bd6d7c72b1c" Dec 11 16:01:17 crc kubenswrapper[5120]: E1211 16:01:17.156750 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 11 16:01:17 crc kubenswrapper[5120]: E1211 16:01:17.161354 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18803495d907f670\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18803495d907f670 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:01:12.1251836 +0000 UTC m=+21.379486931,LastTimestamp:2025-12-11 16:01:17.156718388 +0000 UTC m=+26.411021719,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:01:17 crc kubenswrapper[5120]: I1211 16:01:17.950497 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 11 16:01:18 crc kubenswrapper[5120]: I1211 16:01:18.801698 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:01:18 crc kubenswrapper[5120]: I1211 16:01:18.802649 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:01:18 crc kubenswrapper[5120]: I1211 16:01:18.802695 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:01:18 crc kubenswrapper[5120]: I1211 16:01:18.802710 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:01:18 crc kubenswrapper[5120]: I1211 16:01:18.802738 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 11 16:01:18 crc kubenswrapper[5120]: E1211 16:01:18.812350 5120 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 11 16:01:18 crc kubenswrapper[5120]: I1211 16:01:18.951803 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 11 16:01:19 crc kubenswrapper[5120]: I1211 16:01:19.950869 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io 
"crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 11 16:01:20 crc kubenswrapper[5120]: E1211 16:01:20.585520 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 11 16:01:20 crc kubenswrapper[5120]: I1211 16:01:20.949644 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 11 16:01:21 crc kubenswrapper[5120]: E1211 16:01:21.065875 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 11 16:01:21 crc kubenswrapper[5120]: I1211 16:01:21.115049 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:01:21 crc kubenswrapper[5120]: I1211 16:01:21.115299 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:01:21 crc kubenswrapper[5120]: I1211 16:01:21.116023 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:01:21 crc kubenswrapper[5120]: I1211 16:01:21.116068 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:01:21 crc kubenswrapper[5120]: I1211 16:01:21.116080 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:01:21 crc kubenswrapper[5120]: E1211 16:01:21.116504 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"crc\" not found" node="crc"
Dec 11 16:01:21 crc kubenswrapper[5120]: I1211 16:01:21.116796 5120 scope.go:117] "RemoveContainer" containerID="75f8edef607aa7a0e66249dd37dd7b9be60fded7323f4314ebb86bd6d7c72b1c"
Dec 11 16:01:21 crc kubenswrapper[5120]: E1211 16:01:21.117037 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 11 16:01:21 crc kubenswrapper[5120]: E1211 16:01:21.120975 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18803495d907f670\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18803495d907f670 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:01:12.1251836 +0000 UTC m=+21.379486931,LastTimestamp:2025-12-11 16:01:21.116999467 +0000 UTC m=+30.371302798,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:01:21 crc kubenswrapper[5120]: I1211 16:01:21.949720 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:22 crc kubenswrapper[5120]: I1211 16:01:22.951058 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:23 crc kubenswrapper[5120]: I1211 16:01:23.947575 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:24 crc kubenswrapper[5120]: E1211 16:01:24.120700 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 11 16:01:24 crc kubenswrapper[5120]: I1211 16:01:24.954372 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:25 crc kubenswrapper[5120]: E1211 16:01:25.069577 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 11 16:01:25 crc kubenswrapper[5120]: E1211 16:01:25.705015 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 11 16:01:25 crc kubenswrapper[5120]: I1211 16:01:25.812525 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:01:25 crc kubenswrapper[5120]: I1211 16:01:25.813785 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:01:25 crc kubenswrapper[5120]: I1211 16:01:25.813818 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:01:25 crc kubenswrapper[5120]: I1211 16:01:25.813827 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:01:25 crc kubenswrapper[5120]: I1211 16:01:25.813846 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 11 16:01:25 crc kubenswrapper[5120]: E1211 16:01:25.828382 5120 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 11 16:01:25 crc kubenswrapper[5120]: I1211 16:01:25.952821 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:26 crc kubenswrapper[5120]: I1211 16:01:26.951449 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:27 crc kubenswrapper[5120]: E1211 16:01:27.595872 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 11 16:01:27 crc kubenswrapper[5120]: I1211 16:01:27.951192 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:28 crc kubenswrapper[5120]: I1211 16:01:28.953435 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:29 crc kubenswrapper[5120]: I1211 16:01:29.950581 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:30 crc kubenswrapper[5120]: I1211 16:01:30.954547 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:31 crc kubenswrapper[5120]: E1211 16:01:31.066444 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 11 16:01:31 crc kubenswrapper[5120]: I1211 16:01:31.953905 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:32 crc kubenswrapper[5120]: I1211 16:01:32.828732 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:01:32 crc kubenswrapper[5120]: I1211 16:01:32.830418 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:01:32 crc kubenswrapper[5120]: I1211 16:01:32.830460 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:01:32 crc kubenswrapper[5120]: I1211 16:01:32.830471 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:01:32 crc kubenswrapper[5120]: I1211 16:01:32.830501 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 11 16:01:32 crc kubenswrapper[5120]: E1211 16:01:32.843818 5120 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 11 16:01:32 crc kubenswrapper[5120]: I1211 16:01:32.954850 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:33 crc kubenswrapper[5120]: I1211 16:01:33.947617 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:34 crc kubenswrapper[5120]: I1211 16:01:34.022245 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:01:34 crc kubenswrapper[5120]: I1211 16:01:34.023868 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:01:34 crc kubenswrapper[5120]: I1211 16:01:34.023940 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:01:34 crc kubenswrapper[5120]: I1211 16:01:34.023968 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:01:34 crc kubenswrapper[5120]: E1211 16:01:34.024764 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:01:34 crc kubenswrapper[5120]: I1211 16:01:34.025370 5120 scope.go:117] "RemoveContainer" containerID="75f8edef607aa7a0e66249dd37dd7b9be60fded7323f4314ebb86bd6d7c72b1c"
Dec 11 16:01:34 crc kubenswrapper[5120]: E1211 16:01:34.036690 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188034916f2f87f2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188034916f2f87f2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:53.169522674 +0000 UTC m=+2.423826015,LastTimestamp:2025-12-11 16:01:34.027568912 +0000 UTC m=+43.281872283,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:01:34 crc kubenswrapper[5120]: E1211 16:01:34.244529 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188034917a7183d0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188034917a7183d0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:53.358396368 +0000 UTC m=+2.612699689,LastTimestamp:2025-12-11 16:01:34.238684582 +0000 UTC m=+43.492987913,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:01:34 crc kubenswrapper[5120]: E1211 16:01:34.256954 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188034917ae92438\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188034917ae92438 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:00:53.366236216 +0000 UTC m=+2.620539557,LastTimestamp:2025-12-11 16:01:34.250742937 +0000 UTC m=+43.505046268,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:01:34 crc kubenswrapper[5120]: E1211 16:01:34.602107 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 11 16:01:34 crc kubenswrapper[5120]: I1211 16:01:34.949521 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:35 crc kubenswrapper[5120]: I1211 16:01:35.188777 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Dec 11 16:01:35 crc kubenswrapper[5120]: I1211 16:01:35.190301 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"91dbc4d01010eb3a9bd3237f58d5cf45c9df2e3b0db5f040f9d97bb3389bff3e"}
Dec 11 16:01:35 crc kubenswrapper[5120]: I1211 16:01:35.190560 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:01:35 crc kubenswrapper[5120]: I1211 16:01:35.191240 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:01:35 crc kubenswrapper[5120]: I1211 16:01:35.191331 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:01:35 crc kubenswrapper[5120]: I1211 16:01:35.191345 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:01:35 crc kubenswrapper[5120]: E1211 16:01:35.191786 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:01:35 crc kubenswrapper[5120]: I1211 16:01:35.951968 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:36 crc kubenswrapper[5120]: I1211 16:01:36.195281 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Dec 11 16:01:36 crc kubenswrapper[5120]: I1211 16:01:36.195902 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Dec 11 16:01:36 crc kubenswrapper[5120]: I1211 16:01:36.198314 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="91dbc4d01010eb3a9bd3237f58d5cf45c9df2e3b0db5f040f9d97bb3389bff3e" exitCode=255
Dec 11 16:01:36 crc kubenswrapper[5120]: I1211 16:01:36.198389 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"91dbc4d01010eb3a9bd3237f58d5cf45c9df2e3b0db5f040f9d97bb3389bff3e"}
Dec 11 16:01:36 crc kubenswrapper[5120]: I1211 16:01:36.198451 5120 scope.go:117] "RemoveContainer" containerID="75f8edef607aa7a0e66249dd37dd7b9be60fded7323f4314ebb86bd6d7c72b1c"
Dec 11 16:01:36 crc kubenswrapper[5120]: I1211 16:01:36.198763 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:01:36 crc kubenswrapper[5120]: I1211 16:01:36.199698 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:01:36 crc kubenswrapper[5120]: I1211 16:01:36.199734 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:01:36 crc kubenswrapper[5120]: I1211 16:01:36.199745 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:01:36 crc kubenswrapper[5120]: E1211 16:01:36.200131 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:01:36 crc kubenswrapper[5120]: I1211 16:01:36.200474 5120 scope.go:117] "RemoveContainer" containerID="91dbc4d01010eb3a9bd3237f58d5cf45c9df2e3b0db5f040f9d97bb3389bff3e"
Dec 11 16:01:36 crc kubenswrapper[5120]: E1211 16:01:36.200716 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 11 16:01:36 crc kubenswrapper[5120]: E1211 16:01:36.205653 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18803495d907f670\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18803495d907f670 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:01:12.1251836 +0000 UTC m=+21.379486931,LastTimestamp:2025-12-11 16:01:36.200678287 +0000 UTC m=+45.454981618,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:01:36 crc kubenswrapper[5120]: I1211 16:01:36.951140 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:37 crc kubenswrapper[5120]: I1211 16:01:37.155030 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:01:37 crc kubenswrapper[5120]: I1211 16:01:37.205738 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Dec 11 16:01:37 crc kubenswrapper[5120]: I1211 16:01:37.208720 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:01:37 crc kubenswrapper[5120]: I1211 16:01:37.209880 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:01:37 crc kubenswrapper[5120]: I1211 16:01:37.209944 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:01:37 crc kubenswrapper[5120]: I1211 16:01:37.210002 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:01:37 crc kubenswrapper[5120]: E1211 16:01:37.210669 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:01:37 crc kubenswrapper[5120]: I1211 16:01:37.211249 5120 scope.go:117] "RemoveContainer" containerID="91dbc4d01010eb3a9bd3237f58d5cf45c9df2e3b0db5f040f9d97bb3389bff3e"
Dec 11 16:01:37 crc kubenswrapper[5120]: E1211 16:01:37.211646 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 11 16:01:37 crc kubenswrapper[5120]: E1211 16:01:37.219791 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18803495d907f670\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18803495d907f670 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:01:12.1251836 +0000 UTC m=+21.379486931,LastTimestamp:2025-12-11 16:01:37.211582281 +0000 UTC m=+46.465885642,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:01:37 crc kubenswrapper[5120]: I1211 16:01:37.950741 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:38 crc kubenswrapper[5120]: E1211 16:01:38.201832 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 11 16:01:38 crc kubenswrapper[5120]: E1211 16:01:38.533922 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 11 16:01:38 crc kubenswrapper[5120]: I1211 16:01:38.953527 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:39 crc kubenswrapper[5120]: I1211 16:01:39.844910 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:01:39 crc kubenswrapper[5120]: I1211 16:01:39.846004 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:01:39 crc kubenswrapper[5120]: I1211 16:01:39.846050 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:01:39 crc kubenswrapper[5120]: I1211 16:01:39.846064 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:01:39 crc kubenswrapper[5120]: I1211 16:01:39.846092 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 11 16:01:39 crc kubenswrapper[5120]: E1211 16:01:39.855332 5120 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 11 16:01:39 crc kubenswrapper[5120]: I1211 16:01:39.947426 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:40 crc kubenswrapper[5120]: I1211 16:01:40.950787 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:41 crc kubenswrapper[5120]: E1211 16:01:41.067522 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 11 16:01:41 crc kubenswrapper[5120]: E1211 16:01:41.607270 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 11 16:01:41 crc kubenswrapper[5120]: I1211 16:01:41.950847 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:42 crc kubenswrapper[5120]: I1211 16:01:42.953617 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:43 crc kubenswrapper[5120]: I1211 16:01:43.949819 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:44 crc kubenswrapper[5120]: I1211 16:01:44.301707 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 11 16:01:44 crc kubenswrapper[5120]: I1211 16:01:44.301907 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:01:44 crc kubenswrapper[5120]: I1211 16:01:44.302884 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:01:44 crc kubenswrapper[5120]: I1211 16:01:44.302931 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:01:44 crc kubenswrapper[5120]: I1211 16:01:44.302943 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:01:44 crc kubenswrapper[5120]: E1211 16:01:44.303269 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:01:44 crc kubenswrapper[5120]: I1211 16:01:44.949331 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:45 crc kubenswrapper[5120]: I1211 16:01:45.191462 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:01:45 crc kubenswrapper[5120]: I1211 16:01:45.191710 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:01:45 crc kubenswrapper[5120]: I1211 16:01:45.192661 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:01:45 crc kubenswrapper[5120]: I1211 16:01:45.192708 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:01:45 crc kubenswrapper[5120]: I1211 16:01:45.192719 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:01:45 crc kubenswrapper[5120]: E1211 16:01:45.193097 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:01:45 crc kubenswrapper[5120]: I1211 16:01:45.193346 5120 scope.go:117] "RemoveContainer" containerID="91dbc4d01010eb3a9bd3237f58d5cf45c9df2e3b0db5f040f9d97bb3389bff3e"
Dec 11 16:01:45 crc kubenswrapper[5120]: E1211 16:01:45.193542 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 11 16:01:45 crc kubenswrapper[5120]: E1211 16:01:45.197939 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18803495d907f670\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18803495d907f670 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:01:12.1251836 +0000 UTC m=+21.379486931,LastTimestamp:2025-12-11 16:01:45.193495413 +0000 UTC m=+54.447798744,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:01:45 crc kubenswrapper[5120]: I1211 16:01:45.950603 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:46 crc kubenswrapper[5120]: I1211 16:01:46.855881 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:01:46 crc kubenswrapper[5120]: I1211 16:01:46.857044 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:01:46 crc kubenswrapper[5120]: I1211 16:01:46.857074 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:01:46 crc kubenswrapper[5120]: I1211 16:01:46.857085 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:01:46 crc kubenswrapper[5120]: I1211 16:01:46.857105 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 11 16:01:46 crc kubenswrapper[5120]: E1211 16:01:46.867135 5120 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 11 16:01:46 crc kubenswrapper[5120]: I1211 16:01:46.948987 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:48 crc kubenswrapper[5120]: E1211 16:01:48.290833 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 11 16:01:48 crc kubenswrapper[5120]: E1211 16:01:48.290847 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 11 16:01:48 crc kubenswrapper[5120]: I1211 16:01:48.291045 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:48 crc kubenswrapper[5120]: E1211 16:01:48.614934 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 11 16:01:48 crc kubenswrapper[5120]: I1211 16:01:48.951104 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:49 crc kubenswrapper[5120]: I1211 16:01:49.949569 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:50 crc kubenswrapper[5120]: I1211 16:01:50.950534 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:51 crc kubenswrapper[5120]: E1211 16:01:51.069547 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 11 16:01:51 crc kubenswrapper[5120]: I1211 16:01:51.950397 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:52 crc kubenswrapper[5120]: I1211 16:01:52.949721 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:53 crc kubenswrapper[5120]: I1211 16:01:53.867264 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:01:53 crc kubenswrapper[5120]: I1211 16:01:53.868682 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:01:53 crc kubenswrapper[5120]: I1211 16:01:53.868760 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:01:53 crc kubenswrapper[5120]: I1211 16:01:53.868783 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:01:53 crc kubenswrapper[5120]: I1211 16:01:53.868822 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 11 16:01:53 crc kubenswrapper[5120]: E1211 16:01:53.881099 5120 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 11 16:01:53 crc kubenswrapper[5120]: I1211 16:01:53.951267 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:54 crc kubenswrapper[5120]: I1211 16:01:54.949285 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 11 16:01:55 crc kubenswrapper[5120]: I1211 16:01:55.450554 5120 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-7brsj"
Dec 11 16:01:55 crc kubenswrapper[5120]: I1211 16:01:55.455676 5120 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-7brsj"
Dec 11 16:01:55 crc kubenswrapper[5120]: I1211 16:01:55.503110 5120 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Dec 11 16:01:55 crc kubenswrapper[5120]: I1211 16:01:55.862640 5120 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Dec 11 16:01:56 crc kubenswrapper[5120]: I1211 16:01:56.457033 5120 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-01-10 15:56:55 +0000 UTC" deadline="2026-01-03 04:41:58.849356315 +0000 UTC"
Dec 11 16:01:56 crc kubenswrapper[5120]: I1211 16:01:56.457076 5120 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="540h40m2.392283024s"
Dec 11 16:01:58 crc kubenswrapper[5120]: I1211 16:01:58.021640 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:01:58 crc kubenswrapper[5120]: I1211 16:01:58.022927 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:01:58 crc kubenswrapper[5120]: I1211 16:01:58.023001 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:01:58 crc kubenswrapper[5120]: I1211 16:01:58.023020 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:01:58 crc kubenswrapper[5120]: E1211 16:01:58.024238 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:01:58 crc kubenswrapper[5120]: I1211 16:01:58.024805 5120 scope.go:117] "RemoveContainer" containerID="91dbc4d01010eb3a9bd3237f58d5cf45c9df2e3b0db5f040f9d97bb3389bff3e"
Dec 11 16:01:58 crc kubenswrapper[5120]:
I1211 16:01:58.261821 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 11 16:01:58 crc kubenswrapper[5120]: I1211 16:01:58.263224 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"7a41fbe2b0881e86b16c4ddd845a97a6f0fe9b72c6b542e1e379a369c26766ad"} Dec 11 16:01:58 crc kubenswrapper[5120]: I1211 16:01:58.263454 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:01:58 crc kubenswrapper[5120]: I1211 16:01:58.263985 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:01:58 crc kubenswrapper[5120]: I1211 16:01:58.264030 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:01:58 crc kubenswrapper[5120]: I1211 16:01:58.264040 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:01:58 crc kubenswrapper[5120]: E1211 16:01:58.264440 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.269058 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.269574 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.270951 5120 generic.go:358] "Generic (PLEG): container 
finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="7a41fbe2b0881e86b16c4ddd845a97a6f0fe9b72c6b542e1e379a369c26766ad" exitCode=255 Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.270987 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"7a41fbe2b0881e86b16c4ddd845a97a6f0fe9b72c6b542e1e379a369c26766ad"} Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.271017 5120 scope.go:117] "RemoveContainer" containerID="91dbc4d01010eb3a9bd3237f58d5cf45c9df2e3b0db5f040f9d97bb3389bff3e" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.271168 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.271709 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.271755 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.271775 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:00 crc kubenswrapper[5120]: E1211 16:02:00.272144 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.272356 5120 scope.go:117] "RemoveContainer" containerID="7a41fbe2b0881e86b16c4ddd845a97a6f0fe9b72c6b542e1e379a369c26766ad" Dec 11 16:02:00 crc kubenswrapper[5120]: E1211 16:02:00.272550 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.881620 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.882463 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.882512 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.882524 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.882645 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.890544 5120 kubelet_node_status.go:127] "Node was previously registered" node="crc" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.890878 5120 kubelet_node_status.go:81] "Successfully registered node" node="crc" Dec 11 16:02:00 crc kubenswrapper[5120]: E1211 16:02:00.890907 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.893626 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.893651 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.893660 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:00 
crc kubenswrapper[5120]: I1211 16:02:00.893674 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.893685 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:00Z","lastTransitionTime":"2025-12-11T16:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:02:00 crc kubenswrapper[5120]: E1211 16:02:00.904295 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"message\\\":\\\"kubelet has 
sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a391
50f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\
\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\
":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"93660043-3b1f-49ac-bde1-adfbb3f6633e\\\",\\\"systemUUID\\\":\\\"07ea2ba6-937b-4347-9d9b-4ade3aaec959\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.910194 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.910228 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.910240 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.910253 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.910263 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:00Z","lastTransitionTime":"2025-12-11T16:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:00 crc kubenswrapper[5120]: E1211 16:02:00.920019 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"93660043-3b1f-49ac-bde1-adfbb3f6633e\\\",\\\"systemUUID\\\":\\\"07ea2ba6-937b-4347-9d9b-4ade3aaec959\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.926780 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.926833 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.926844 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.926862 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.926874 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:00Z","lastTransitionTime":"2025-12-11T16:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:00 crc kubenswrapper[5120]: E1211 16:02:00.935590 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"93660043-3b1f-49ac-bde1-adfbb3f6633e\\\",\\\"systemUUID\\\":\\\"07ea2ba6-937b-4347-9d9b-4ade3aaec959\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.944388 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.944522 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.944593 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.944657 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:00 crc kubenswrapper[5120]: I1211 16:02:00.944727 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:00Z","lastTransitionTime":"2025-12-11T16:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:00 crc kubenswrapper[5120]: E1211 16:02:00.953609 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"93660043-3b1f-49ac-bde1-adfbb3f6633e\\\",\\\"systemUUID\\\":\\\"07ea2ba6-937b-4347-9d9b-4ade3aaec959\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 11 16:02:00 crc kubenswrapper[5120]: E1211 16:02:00.953890 5120 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Dec 11 16:02:00 crc kubenswrapper[5120]: E1211 16:02:00.953969 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:01 crc kubenswrapper[5120]: E1211 16:02:01.054099 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:01 crc kubenswrapper[5120]: E1211 16:02:01.069847 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 11 16:02:01 crc kubenswrapper[5120]: E1211 16:02:01.154878 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:01 crc kubenswrapper[5120]: E1211 16:02:01.255282 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:01 crc kubenswrapper[5120]: I1211 16:02:01.275579 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Dec 11 16:02:01 crc kubenswrapper[5120]: E1211 16:02:01.356438 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:01 crc kubenswrapper[5120]: E1211 16:02:01.456774 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:01 crc kubenswrapper[5120]: E1211 16:02:01.557816 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:01 crc kubenswrapper[5120]: E1211 16:02:01.657895 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:01 crc kubenswrapper[5120]: E1211 16:02:01.758852 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:01 crc kubenswrapper[5120]: E1211 16:02:01.859384 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:01 crc kubenswrapper[5120]: E1211 16:02:01.959886 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:02 crc kubenswrapper[5120]: E1211 16:02:02.060979 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:02 crc kubenswrapper[5120]: E1211 16:02:02.161412 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:02 crc kubenswrapper[5120]: E1211 16:02:02.261705 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:02 crc kubenswrapper[5120]: E1211 16:02:02.362576 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:02 crc kubenswrapper[5120]: E1211 16:02:02.462973 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:02 crc kubenswrapper[5120]: E1211 16:02:02.563910 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:02 crc kubenswrapper[5120]: E1211 16:02:02.664726 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:02 crc kubenswrapper[5120]: E1211 16:02:02.765612 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:02 crc kubenswrapper[5120]: E1211 16:02:02.866607 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:02 crc kubenswrapper[5120]: E1211 16:02:02.967211 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:03 crc kubenswrapper[5120]: E1211 16:02:03.067947 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:03 crc kubenswrapper[5120]: E1211 16:02:03.168100 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:03 crc kubenswrapper[5120]: E1211 16:02:03.269063 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:03 crc kubenswrapper[5120]: E1211 16:02:03.369861 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:03 crc kubenswrapper[5120]: E1211 16:02:03.470588 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:03 crc kubenswrapper[5120]: E1211 16:02:03.570967 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:03 crc kubenswrapper[5120]: E1211 16:02:03.674748 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:03 crc kubenswrapper[5120]: E1211 16:02:03.775735 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:03 crc kubenswrapper[5120]: E1211 16:02:03.876889 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:03 crc kubenswrapper[5120]: E1211 16:02:03.977998 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:04 crc kubenswrapper[5120]: E1211 16:02:04.078724 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:04 crc kubenswrapper[5120]: E1211 16:02:04.179750 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:04 crc kubenswrapper[5120]: E1211 16:02:04.280105 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:04 crc kubenswrapper[5120]: E1211 16:02:04.380298 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:04 crc kubenswrapper[5120]: E1211 16:02:04.480693 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:04 crc kubenswrapper[5120]: E1211 16:02:04.581276 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:04 crc kubenswrapper[5120]: E1211 16:02:04.681590 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:04 crc kubenswrapper[5120]: E1211 16:02:04.782648 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:04 crc kubenswrapper[5120]: E1211 16:02:04.883387 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:04 crc kubenswrapper[5120]: E1211 16:02:04.984002 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:05 crc kubenswrapper[5120]: E1211 16:02:05.084380 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:05 crc kubenswrapper[5120]: E1211 16:02:05.184676 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:05 crc kubenswrapper[5120]: E1211 16:02:05.285089 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:05 crc kubenswrapper[5120]: E1211 16:02:05.385632 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:05 crc kubenswrapper[5120]: E1211 16:02:05.486298 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:05 crc kubenswrapper[5120]: E1211 16:02:05.586883 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:05 crc kubenswrapper[5120]: E1211 16:02:05.687510 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:05 crc kubenswrapper[5120]: E1211 16:02:05.788290 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:05 crc kubenswrapper[5120]: E1211 16:02:05.889446 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:05 crc kubenswrapper[5120]: E1211 16:02:05.990280 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:06 crc kubenswrapper[5120]: E1211 16:02:06.091126 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:06 crc kubenswrapper[5120]: E1211 16:02:06.191978 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:06 crc kubenswrapper[5120]: E1211 16:02:06.292103 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:06 crc kubenswrapper[5120]: E1211 16:02:06.392929 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:06 crc kubenswrapper[5120]: E1211 16:02:06.494007 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:06 crc kubenswrapper[5120]: E1211 16:02:06.594675 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:06 crc kubenswrapper[5120]: E1211 16:02:06.695015 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:06 crc kubenswrapper[5120]: E1211 16:02:06.795205 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:06 crc kubenswrapper[5120]: E1211 16:02:06.896284 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:06 crc kubenswrapper[5120]: E1211 16:02:06.997070 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:07 crc kubenswrapper[5120]: E1211 16:02:07.097368 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:07 crc kubenswrapper[5120]: I1211 16:02:07.154794 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:02:07 crc kubenswrapper[5120]: I1211 16:02:07.155237 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:02:07 crc kubenswrapper[5120]: I1211 16:02:07.156559 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:02:07 crc kubenswrapper[5120]: I1211 16:02:07.156691 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:02:07 crc kubenswrapper[5120]: I1211 16:02:07.156738 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:02:07 crc kubenswrapper[5120]: E1211 16:02:07.157936 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:02:07 crc kubenswrapper[5120]: I1211 16:02:07.158488 5120 scope.go:117] "RemoveContainer" containerID="7a41fbe2b0881e86b16c4ddd845a97a6f0fe9b72c6b542e1e379a369c26766ad"
Dec 11 16:02:07 crc kubenswrapper[5120]: E1211 16:02:07.158970 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 11 16:02:07 crc kubenswrapper[5120]: E1211 16:02:07.197878 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:07 crc kubenswrapper[5120]: E1211 16:02:07.299014 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:07 crc kubenswrapper[5120]: E1211 16:02:07.399825 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:07 crc kubenswrapper[5120]: E1211 16:02:07.500164 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:07 crc kubenswrapper[5120]: E1211 16:02:07.600598 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:07 crc kubenswrapper[5120]: E1211 16:02:07.701248 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:07 crc kubenswrapper[5120]: E1211 16:02:07.802048 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:07 crc kubenswrapper[5120]: E1211 16:02:07.902871 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:08 crc kubenswrapper[5120]: E1211 16:02:08.003785 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:08 crc kubenswrapper[5120]: E1211 16:02:08.104415 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:08 crc kubenswrapper[5120]: E1211 16:02:08.205291 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:08 crc kubenswrapper[5120]: I1211 16:02:08.264326 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:02:08 crc kubenswrapper[5120]: I1211 16:02:08.264579 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:02:08 crc kubenswrapper[5120]: I1211 16:02:08.265498 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:02:08 crc kubenswrapper[5120]: I1211 16:02:08.265539 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:02:08 crc kubenswrapper[5120]: I1211 16:02:08.265559 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:02:08 crc kubenswrapper[5120]: E1211 16:02:08.266179 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:02:08 crc kubenswrapper[5120]: I1211 16:02:08.266496 5120 scope.go:117] "RemoveContainer" containerID="7a41fbe2b0881e86b16c4ddd845a97a6f0fe9b72c6b542e1e379a369c26766ad"
Dec 11 16:02:08 crc kubenswrapper[5120]: E1211 16:02:08.266781 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 11 16:02:08 crc kubenswrapper[5120]: E1211 16:02:08.306493 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:08 crc kubenswrapper[5120]: E1211 16:02:08.407299 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:08 crc kubenswrapper[5120]: E1211 16:02:08.508188 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:08 crc kubenswrapper[5120]: E1211 16:02:08.609131 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:08 crc kubenswrapper[5120]: E1211 16:02:08.709564 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:08 crc kubenswrapper[5120]: E1211 16:02:08.810228 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:08 crc kubenswrapper[5120]: E1211 16:02:08.911193 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:09 crc kubenswrapper[5120]: E1211 16:02:09.012236 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:09 crc kubenswrapper[5120]: I1211 16:02:09.021570 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 11 16:02:09 crc kubenswrapper[5120]: I1211 16:02:09.022803 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:02:09 crc kubenswrapper[5120]: I1211 16:02:09.022911 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:02:09 crc kubenswrapper[5120]: I1211 16:02:09.022945 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:02:09 crc kubenswrapper[5120]: E1211 16:02:09.023591 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 11 16:02:09 crc kubenswrapper[5120]: E1211 16:02:09.113314 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:09 crc kubenswrapper[5120]: E1211 16:02:09.213409 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:09 crc kubenswrapper[5120]: E1211 16:02:09.313517 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:09 crc kubenswrapper[5120]: E1211 16:02:09.413759 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:09 crc kubenswrapper[5120]: E1211 16:02:09.514383 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:09 crc kubenswrapper[5120]: E1211 16:02:09.614979 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:09 crc kubenswrapper[5120]: E1211 16:02:09.715820 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:09 crc kubenswrapper[5120]: E1211 16:02:09.817165 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:09 crc kubenswrapper[5120]: E1211 16:02:09.918256 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:10 crc kubenswrapper[5120]: E1211 16:02:10.019405 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:10 crc kubenswrapper[5120]: E1211 16:02:10.119806 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:10 crc kubenswrapper[5120]: E1211 16:02:10.221062 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:10 crc kubenswrapper[5120]: E1211 16:02:10.321442 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:10 crc kubenswrapper[5120]: E1211 16:02:10.421889 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:10 crc kubenswrapper[5120]: E1211 16:02:10.522883 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:10 crc kubenswrapper[5120]: E1211 16:02:10.623296 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:10 crc kubenswrapper[5120]: E1211 16:02:10.724328 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:10 crc kubenswrapper[5120]: E1211 16:02:10.825481 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:10 crc kubenswrapper[5120]: E1211 16:02:10.926636 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:11 crc kubenswrapper[5120]: E1211 16:02:11.027644 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:11 crc kubenswrapper[5120]: E1211 16:02:11.070670 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 11 16:02:11 crc kubenswrapper[5120]: E1211 16:02:11.128383 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 16:02:11 crc kubenswrapper[5120]: E1211 16:02:11.197296 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Dec 11 16:02:11 crc kubenswrapper[5120]: I1211 16:02:11.200526 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:02:11 crc kubenswrapper[5120]: I1211 16:02:11.200566 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:02:11 crc kubenswrapper[5120]: I1211 16:02:11.200579 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:02:11 crc kubenswrapper[5120]: I1211 16:02:11.200597 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 16:02:11 crc kubenswrapper[5120]: I1211 16:02:11.200609 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:11Z","lastTransitionTime":"2025-12-11T16:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:02:11 crc kubenswrapper[5120]: E1211 16:02:11.209371 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"93660043-3b1f-49ac-bde1-adfbb3f6633e\\\",\\\"systemUUID\\\":\\\"07ea2ba6-937b-4347-9d9b-4ade3aaec959\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 11 16:02:11 crc kubenswrapper[5120]: I1211 16:02:11.216242 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:02:11 crc kubenswrapper[5120]: I1211 16:02:11.216288 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:02:11 crc kubenswrapper[5120]: I1211 16:02:11.216299 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:02:11 crc kubenswrapper[5120]: I1211 16:02:11.216313 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 16:02:11 crc kubenswrapper[5120]: I1211 16:02:11.216322 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:11Z","lastTransitionTime":"2025-12-11T16:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:11 crc kubenswrapper[5120]: E1211 16:02:11.226555 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"93660043-3b1f-49ac-bde1-adfbb3f6633e\\\",\\\"systemUUID\\\":\\\"07ea2ba6-937b-4347-9d9b-4ade3aaec959\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:11 crc kubenswrapper[5120]: I1211 16:02:11.232661 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:11 crc kubenswrapper[5120]: I1211 16:02:11.232731 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:11 crc kubenswrapper[5120]: I1211 16:02:11.232746 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:11 crc kubenswrapper[5120]: I1211 16:02:11.232765 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:11 crc kubenswrapper[5120]: I1211 16:02:11.232778 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:11Z","lastTransitionTime":"2025-12-11T16:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:11 crc kubenswrapper[5120]: E1211 16:02:11.242310 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"93660043-3b1f-49ac-bde1-adfbb3f6633e\\\",\\\"systemUUID\\\":\\\"07ea2ba6-937b-4347-9d9b-4ade3aaec959\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:11 crc kubenswrapper[5120]: I1211 16:02:11.249390 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:11 crc kubenswrapper[5120]: I1211 16:02:11.249440 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:11 crc kubenswrapper[5120]: I1211 16:02:11.249456 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:11 crc kubenswrapper[5120]: I1211 16:02:11.249473 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:11 crc kubenswrapper[5120]: I1211 16:02:11.249486 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:11Z","lastTransitionTime":"2025-12-11T16:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:11 crc kubenswrapper[5120]: E1211 16:02:11.260147 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"93660043-3b1f-49ac-bde1-adfbb3f6633e\\\",\\\"systemUUID\\\":\\\"07ea2ba6-937b-4347-9d9b-4ade3aaec959\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:11 crc kubenswrapper[5120]: E1211 16:02:11.260351 5120 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 11 16:02:11 crc kubenswrapper[5120]: E1211 16:02:11.260383 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:11 crc kubenswrapper[5120]: E1211 16:02:11.361207 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:11 crc kubenswrapper[5120]: E1211 16:02:11.462300 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:11 crc kubenswrapper[5120]: E1211 16:02:11.562830 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:11 crc kubenswrapper[5120]: E1211 16:02:11.663377 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:11 crc kubenswrapper[5120]: E1211 16:02:11.764469 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:11 crc kubenswrapper[5120]: E1211 16:02:11.864802 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:11 crc kubenswrapper[5120]: E1211 16:02:11.965562 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:12 crc kubenswrapper[5120]: E1211 16:02:12.066629 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:12 crc kubenswrapper[5120]: E1211 16:02:12.167372 5120 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:12 crc kubenswrapper[5120]: E1211 16:02:12.268246 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:12 crc kubenswrapper[5120]: E1211 16:02:12.368417 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:12 crc kubenswrapper[5120]: E1211 16:02:12.469526 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:12 crc kubenswrapper[5120]: E1211 16:02:12.570018 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:12 crc kubenswrapper[5120]: E1211 16:02:12.670650 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:12 crc kubenswrapper[5120]: I1211 16:02:12.753334 5120 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Dec 11 16:02:12 crc kubenswrapper[5120]: E1211 16:02:12.771217 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:12 crc kubenswrapper[5120]: E1211 16:02:12.871656 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:12 crc kubenswrapper[5120]: E1211 16:02:12.972707 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:13 crc kubenswrapper[5120]: E1211 16:02:13.074202 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:13 crc kubenswrapper[5120]: E1211 16:02:13.174940 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:13 crc 
kubenswrapper[5120]: E1211 16:02:13.275974 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:13 crc kubenswrapper[5120]: E1211 16:02:13.376405 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:13 crc kubenswrapper[5120]: E1211 16:02:13.477203 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:13 crc kubenswrapper[5120]: E1211 16:02:13.578726 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:13 crc kubenswrapper[5120]: E1211 16:02:13.679308 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:13 crc kubenswrapper[5120]: E1211 16:02:13.780132 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:13 crc kubenswrapper[5120]: E1211 16:02:13.881251 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:13 crc kubenswrapper[5120]: E1211 16:02:13.981952 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:14 crc kubenswrapper[5120]: E1211 16:02:14.082284 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:14 crc kubenswrapper[5120]: E1211 16:02:14.182866 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:14 crc kubenswrapper[5120]: E1211 16:02:14.283373 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:14 crc kubenswrapper[5120]: E1211 16:02:14.384090 5120 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"crc\" not found" Dec 11 16:02:14 crc kubenswrapper[5120]: E1211 16:02:14.485117 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:14 crc kubenswrapper[5120]: I1211 16:02:14.547943 5120 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 11 16:02:14 crc kubenswrapper[5120]: E1211 16:02:14.585558 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:14 crc kubenswrapper[5120]: E1211 16:02:14.686032 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:14 crc kubenswrapper[5120]: E1211 16:02:14.786794 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:14 crc kubenswrapper[5120]: E1211 16:02:14.887292 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:14 crc kubenswrapper[5120]: E1211 16:02:14.988253 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:15 crc kubenswrapper[5120]: E1211 16:02:15.088873 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:15 crc kubenswrapper[5120]: E1211 16:02:15.189129 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:15 crc kubenswrapper[5120]: E1211 16:02:15.289591 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:15 crc kubenswrapper[5120]: E1211 16:02:15.389953 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:15 crc kubenswrapper[5120]: E1211 16:02:15.490230 5120 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:15 crc kubenswrapper[5120]: E1211 16:02:15.590673 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:15 crc kubenswrapper[5120]: E1211 16:02:15.691106 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:15 crc kubenswrapper[5120]: E1211 16:02:15.791353 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:15 crc kubenswrapper[5120]: E1211 16:02:15.891510 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:15 crc kubenswrapper[5120]: E1211 16:02:15.992370 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:16 crc kubenswrapper[5120]: E1211 16:02:16.092914 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:16 crc kubenswrapper[5120]: E1211 16:02:16.193967 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:16 crc kubenswrapper[5120]: E1211 16:02:16.294404 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:16 crc kubenswrapper[5120]: E1211 16:02:16.394790 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:16 crc kubenswrapper[5120]: E1211 16:02:16.495570 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:16 crc kubenswrapper[5120]: E1211 16:02:16.596252 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:16 crc 
kubenswrapper[5120]: E1211 16:02:16.696431 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:16 crc kubenswrapper[5120]: E1211 16:02:16.796616 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:16 crc kubenswrapper[5120]: E1211 16:02:16.897548 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:16 crc kubenswrapper[5120]: E1211 16:02:16.998234 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:17 crc kubenswrapper[5120]: I1211 16:02:17.021775 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:02:17 crc kubenswrapper[5120]: I1211 16:02:17.023447 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:17 crc kubenswrapper[5120]: I1211 16:02:17.023513 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:17 crc kubenswrapper[5120]: I1211 16:02:17.023532 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:17 crc kubenswrapper[5120]: E1211 16:02:17.024363 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:02:17 crc kubenswrapper[5120]: E1211 16:02:17.098928 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:17 crc kubenswrapper[5120]: E1211 16:02:17.199062 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:17 crc kubenswrapper[5120]: E1211 16:02:17.300218 5120 kubelet_node_status.go:515] "Error getting 
the current node from lister" err="node \"crc\" not found" Dec 11 16:02:17 crc kubenswrapper[5120]: E1211 16:02:17.400549 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:17 crc kubenswrapper[5120]: E1211 16:02:17.501710 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:17 crc kubenswrapper[5120]: E1211 16:02:17.601874 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:17 crc kubenswrapper[5120]: E1211 16:02:17.702957 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:17 crc kubenswrapper[5120]: E1211 16:02:17.803989 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:17 crc kubenswrapper[5120]: E1211 16:02:17.905288 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:18 crc kubenswrapper[5120]: E1211 16:02:18.006368 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:18 crc kubenswrapper[5120]: E1211 16:02:18.106880 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:18 crc kubenswrapper[5120]: E1211 16:02:18.207668 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:18 crc kubenswrapper[5120]: E1211 16:02:18.308704 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:18 crc kubenswrapper[5120]: E1211 16:02:18.409295 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:18 crc kubenswrapper[5120]: E1211 16:02:18.510388 5120 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:18 crc kubenswrapper[5120]: E1211 16:02:18.611312 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:18 crc kubenswrapper[5120]: E1211 16:02:18.712476 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:18 crc kubenswrapper[5120]: E1211 16:02:18.813302 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:18 crc kubenswrapper[5120]: E1211 16:02:18.913965 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:19 crc kubenswrapper[5120]: E1211 16:02:19.014415 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:19 crc kubenswrapper[5120]: E1211 16:02:19.114659 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:19 crc kubenswrapper[5120]: E1211 16:02:19.215790 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:19 crc kubenswrapper[5120]: E1211 16:02:19.316638 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:19 crc kubenswrapper[5120]: E1211 16:02:19.417692 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:19 crc kubenswrapper[5120]: E1211 16:02:19.518763 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:19 crc kubenswrapper[5120]: E1211 16:02:19.619677 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:19 crc 
kubenswrapper[5120]: E1211 16:02:19.720731 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:19 crc kubenswrapper[5120]: E1211 16:02:19.820852 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:19 crc kubenswrapper[5120]: E1211 16:02:19.921029 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:20 crc kubenswrapper[5120]: E1211 16:02:20.021510 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:20 crc kubenswrapper[5120]: E1211 16:02:20.121991 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:20 crc kubenswrapper[5120]: E1211 16:02:20.222070 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:20 crc kubenswrapper[5120]: E1211 16:02:20.322737 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:20 crc kubenswrapper[5120]: E1211 16:02:20.423727 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:20 crc kubenswrapper[5120]: E1211 16:02:20.523944 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:20 crc kubenswrapper[5120]: E1211 16:02:20.624105 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:20 crc kubenswrapper[5120]: E1211 16:02:20.725041 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:20 crc kubenswrapper[5120]: E1211 16:02:20.826068 5120 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"crc\" not found" Dec 11 16:02:20 crc kubenswrapper[5120]: E1211 16:02:20.927025 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:21 crc kubenswrapper[5120]: E1211 16:02:21.027580 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:21 crc kubenswrapper[5120]: E1211 16:02:21.071482 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 11 16:02:21 crc kubenswrapper[5120]: E1211 16:02:21.128173 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:21 crc kubenswrapper[5120]: E1211 16:02:21.228647 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:21 crc kubenswrapper[5120]: E1211 16:02:21.329312 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:21 crc kubenswrapper[5120]: E1211 16:02:21.420372 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Dec 11 16:02:21 crc kubenswrapper[5120]: I1211 16:02:21.424251 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:21 crc kubenswrapper[5120]: I1211 16:02:21.424299 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:21 crc kubenswrapper[5120]: I1211 16:02:21.424309 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:21 crc kubenswrapper[5120]: I1211 16:02:21.424322 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:21 crc kubenswrapper[5120]: I1211 
16:02:21.424332 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:21Z","lastTransitionTime":"2025-12-11T16:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:02:21 crc kubenswrapper[5120]: E1211 16:02:21.437248 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"93660043-3b1f-49ac-bde1-adfbb3f6633e\\\",\\\"systemUUID\\\":\\\"07ea2ba6-937b-4347-9d9b-4ade3aaec959\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:21 crc kubenswrapper[5120]: I1211 16:02:21.440624 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:21 crc kubenswrapper[5120]: I1211 16:02:21.440694 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:21 crc kubenswrapper[5120]: I1211 16:02:21.440735 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:21 crc kubenswrapper[5120]: I1211 16:02:21.440763 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:21 crc kubenswrapper[5120]: I1211 16:02:21.440782 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:21Z","lastTransitionTime":"2025-12-11T16:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:21 crc kubenswrapper[5120]: E1211 16:02:21.449070 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"93660043-3b1f-49ac-bde1-adfbb3f6633e\\\",\\\"systemUUID\\\":\\\"07ea2ba6-937b-4347-9d9b-4ade3aaec959\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:21 crc kubenswrapper[5120]: I1211 16:02:21.451844 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:21 crc kubenswrapper[5120]: I1211 16:02:21.451878 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:21 crc kubenswrapper[5120]: I1211 16:02:21.451888 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:21 crc kubenswrapper[5120]: I1211 16:02:21.451900 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:21 crc kubenswrapper[5120]: I1211 16:02:21.451909 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:21Z","lastTransitionTime":"2025-12-11T16:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:21 crc kubenswrapper[5120]: E1211 16:02:21.461614 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"93660043-3b1f-49ac-bde1-adfbb3f6633e\\\",\\\"systemUUID\\\":\\\"07ea2ba6-937b-4347-9d9b-4ade3aaec959\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:21 crc kubenswrapper[5120]: I1211 16:02:21.465275 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:21 crc kubenswrapper[5120]: I1211 16:02:21.465307 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:21 crc kubenswrapper[5120]: I1211 16:02:21.465316 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:21 crc kubenswrapper[5120]: I1211 16:02:21.465327 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:21 crc kubenswrapper[5120]: I1211 16:02:21.465336 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:21Z","lastTransitionTime":"2025-12-11T16:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:21 crc kubenswrapper[5120]: E1211 16:02:21.474928 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"93660043-3b1f-49ac-bde1-adfbb3f6633e\\\",\\\"systemUUID\\\":\\\"07ea2ba6-937b-4347-9d9b-4ade3aaec959\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:21 crc kubenswrapper[5120]: E1211 16:02:21.475095 5120 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 11 16:02:21 crc kubenswrapper[5120]: E1211 16:02:21.475141 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:21 crc kubenswrapper[5120]: E1211 16:02:21.575792 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:21 crc kubenswrapper[5120]: E1211 16:02:21.676112 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:21 crc kubenswrapper[5120]: E1211 16:02:21.777024 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:21 crc kubenswrapper[5120]: E1211 16:02:21.877315 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:21 crc kubenswrapper[5120]: E1211 16:02:21.977960 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:22 crc kubenswrapper[5120]: I1211 16:02:22.021005 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:02:22 crc kubenswrapper[5120]: I1211 16:02:22.022024 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:22 crc kubenswrapper[5120]: I1211 16:02:22.022071 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:22 crc kubenswrapper[5120]: I1211 16:02:22.022084 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:22 crc kubenswrapper[5120]: E1211 16:02:22.022617 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:02:22 crc kubenswrapper[5120]: I1211 16:02:22.022916 5120 scope.go:117] "RemoveContainer" containerID="7a41fbe2b0881e86b16c4ddd845a97a6f0fe9b72c6b542e1e379a369c26766ad" Dec 11 16:02:22 crc kubenswrapper[5120]: E1211 16:02:22.023193 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 11 16:02:22 crc kubenswrapper[5120]: E1211 16:02:22.078264 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:22 crc kubenswrapper[5120]: E1211 16:02:22.178610 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:22 crc kubenswrapper[5120]: E1211 16:02:22.279650 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:22 crc kubenswrapper[5120]: E1211 16:02:22.379826 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:22 crc kubenswrapper[5120]: E1211 16:02:22.479958 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:22 crc kubenswrapper[5120]: E1211 16:02:22.581231 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:22 crc 
kubenswrapper[5120]: E1211 16:02:22.682388 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:22 crc kubenswrapper[5120]: E1211 16:02:22.782577 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:22 crc kubenswrapper[5120]: E1211 16:02:22.882662 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:22 crc kubenswrapper[5120]: E1211 16:02:22.983626 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:23 crc kubenswrapper[5120]: I1211 16:02:23.021562 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 11 16:02:23 crc kubenswrapper[5120]: I1211 16:02:23.022985 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:23 crc kubenswrapper[5120]: I1211 16:02:23.023036 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:23 crc kubenswrapper[5120]: I1211 16:02:23.023054 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:23 crc kubenswrapper[5120]: E1211 16:02:23.023452 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 11 16:02:23 crc kubenswrapper[5120]: E1211 16:02:23.083949 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:23 crc kubenswrapper[5120]: E1211 16:02:23.184681 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:23 crc kubenswrapper[5120]: E1211 16:02:23.285686 5120 kubelet_node_status.go:515] "Error getting 
the current node from lister" err="node \"crc\" not found" Dec 11 16:02:23 crc kubenswrapper[5120]: E1211 16:02:23.386344 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:23 crc kubenswrapper[5120]: E1211 16:02:23.486623 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:23 crc kubenswrapper[5120]: E1211 16:02:23.587478 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:23 crc kubenswrapper[5120]: E1211 16:02:23.688105 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:23 crc kubenswrapper[5120]: E1211 16:02:23.788524 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:23 crc kubenswrapper[5120]: E1211 16:02:23.888633 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:23 crc kubenswrapper[5120]: E1211 16:02:23.988862 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:24 crc kubenswrapper[5120]: E1211 16:02:24.089455 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:24 crc kubenswrapper[5120]: E1211 16:02:24.189789 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:24 crc kubenswrapper[5120]: E1211 16:02:24.290745 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:24 crc kubenswrapper[5120]: I1211 16:02:24.385052 5120 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 11 16:02:24 crc kubenswrapper[5120]: E1211 
16:02:24.391331 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:24 crc kubenswrapper[5120]: E1211 16:02:24.491556 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:24 crc kubenswrapper[5120]: E1211 16:02:24.592369 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:24 crc kubenswrapper[5120]: E1211 16:02:24.693514 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:24 crc kubenswrapper[5120]: E1211 16:02:24.793667 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:24 crc kubenswrapper[5120]: E1211 16:02:24.894231 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:24 crc kubenswrapper[5120]: E1211 16:02:24.995097 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:25 crc kubenswrapper[5120]: E1211 16:02:25.096255 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:25 crc kubenswrapper[5120]: E1211 16:02:25.197323 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:25 crc kubenswrapper[5120]: E1211 16:02:25.298454 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:25 crc kubenswrapper[5120]: E1211 16:02:25.399237 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:25 crc kubenswrapper[5120]: E1211 16:02:25.500085 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 
16:02:25 crc kubenswrapper[5120]: E1211 16:02:25.600556 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:25 crc kubenswrapper[5120]: E1211 16:02:25.701577 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:25 crc kubenswrapper[5120]: E1211 16:02:25.802531 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:25 crc kubenswrapper[5120]: E1211 16:02:25.903602 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:26 crc kubenswrapper[5120]: E1211 16:02:26.004583 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:26 crc kubenswrapper[5120]: E1211 16:02:26.105653 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:26 crc kubenswrapper[5120]: E1211 16:02:26.206649 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:26 crc kubenswrapper[5120]: E1211 16:02:26.307558 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:26 crc kubenswrapper[5120]: E1211 16:02:26.408181 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:26 crc kubenswrapper[5120]: E1211 16:02:26.509382 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:26 crc kubenswrapper[5120]: E1211 16:02:26.610257 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:26 crc kubenswrapper[5120]: E1211 16:02:26.711118 5120 kubelet_node_status.go:515] "Error getting the current node from 
lister" err="node \"crc\" not found" Dec 11 16:02:26 crc kubenswrapper[5120]: E1211 16:02:26.812284 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:26 crc kubenswrapper[5120]: E1211 16:02:26.912744 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:27 crc kubenswrapper[5120]: E1211 16:02:27.012930 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:27 crc kubenswrapper[5120]: E1211 16:02:27.113652 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:27 crc kubenswrapper[5120]: E1211 16:02:27.214382 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:27 crc kubenswrapper[5120]: E1211 16:02:27.314877 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:27 crc kubenswrapper[5120]: E1211 16:02:27.415249 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:27 crc kubenswrapper[5120]: E1211 16:02:27.515801 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 16:02:27 crc kubenswrapper[5120]: I1211 16:02:27.591932 5120 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 11 16:02:27 crc kubenswrapper[5120]: I1211 16:02:27.618194 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:27 crc kubenswrapper[5120]: I1211 16:02:27.618238 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:27 crc kubenswrapper[5120]: I1211 16:02:27.618249 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:27 crc kubenswrapper[5120]: I1211 16:02:27.618263 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:27 crc kubenswrapper[5120]: I1211 16:02:27.618272 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:27Z","lastTransitionTime":"2025-12-11T16:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:02:27 crc kubenswrapper[5120]: I1211 16:02:27.659766 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 16:02:27 crc kubenswrapper[5120]: I1211 16:02:27.667318 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 16:02:27 crc kubenswrapper[5120]: I1211 16:02:27.720447 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:27 crc kubenswrapper[5120]: I1211 16:02:27.720526 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:27 crc kubenswrapper[5120]: I1211 16:02:27.720545 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:27 crc kubenswrapper[5120]: I1211 16:02:27.720567 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:27 crc kubenswrapper[5120]: I1211 16:02:27.720587 5120 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:27Z","lastTransitionTime":"2025-12-11T16:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:02:27 crc kubenswrapper[5120]: I1211 16:02:27.769312 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Dec 11 16:02:27 crc kubenswrapper[5120]: I1211 16:02:27.823085 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:27 crc kubenswrapper[5120]: I1211 16:02:27.823124 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:27 crc kubenswrapper[5120]: I1211 16:02:27.823134 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:27 crc kubenswrapper[5120]: I1211 16:02:27.823162 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:27 crc kubenswrapper[5120]: I1211 16:02:27.823171 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:27Z","lastTransitionTime":"2025-12-11T16:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:27 crc kubenswrapper[5120]: I1211 16:02:27.869918 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:02:27 crc kubenswrapper[5120]: I1211 16:02:27.925946 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:27 crc kubenswrapper[5120]: I1211 16:02:27.925999 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:27 crc kubenswrapper[5120]: I1211 16:02:27.926013 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:27 crc kubenswrapper[5120]: I1211 16:02:27.926030 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:27 crc kubenswrapper[5120]: I1211 16:02:27.926042 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:27Z","lastTransitionTime":"2025-12-11T16:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:27 crc kubenswrapper[5120]: I1211 16:02:27.970070 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.027863 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.027928 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.027964 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.027982 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.028013 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:28Z","lastTransitionTime":"2025-12-11T16:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.130127 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.130199 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.130209 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.130222 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.130231 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:28Z","lastTransitionTime":"2025-12-11T16:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.232208 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.232272 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.232292 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.232319 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.232336 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:28Z","lastTransitionTime":"2025-12-11T16:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.313348 5120 apiserver.go:52] "Watching apiserver" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.320177 5120 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.320706 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-5jnd7","openshift-ovn-kubernetes/ovnkube-node-bxt85","openshift-image-registry/node-ca-ddlz4","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-diagnostics/network-check-target-fhkjl","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-multus/multus-qzwn6","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-j58br","openshift-dns/node-resolver-djrpd","openshift-etcd/etcd-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-machine-config-operator/machine-config-daemon-fpg9g","openshift-multus/network-metrics-daemon-ccl9q","openshift-network-node-identity/network-node-identity-dgvkt","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-multus/multus-additional-cni-plugins-xmwrh"] Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.321643 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.326478 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.326577 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.326620 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.327371 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.327375 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.327958 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.328030 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.328226 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.328296 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.328895 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.328970 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.329726 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.330389 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.330911 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.330941 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.331034 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.331263 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.333897 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.333925 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.333934 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.333950 5120 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeNotReady" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.333959 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:28Z","lastTransitionTime":"2025-12-11T16:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.340190 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.345833 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ccl9q" Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.345916 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ccl9q" podUID="f1d42362-2047-47d8-b096-bd9f85606eeb" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.349863 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.351188 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.352632 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.353028 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.353741 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.353789 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.353926 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.356701 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-ddlz4" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.358025 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.358402 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.358705 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.358710 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.359462 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-j58br" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.360769 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.360889 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.360936 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.361098 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.361119 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.361223 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.362295 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.363367 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-djrpd" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.366102 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.366631 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.366895 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.366898 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.367120 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.367439 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.367508 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.367736 5120 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.368589 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.373048 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.374955 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.375218 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xmwrh" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.375729 5120 scope.go:117] "RemoveContainer" containerID="7a41fbe2b0881e86b16c4ddd845a97a6f0fe9b72c6b542e1e379a369c26766ad" Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.375943 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.377244 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.377622 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.377800 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.378140 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.378199 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.378661 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.383239 
5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.392281 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.400930 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.409231 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed157e5a-719a-4b7f-b07e-2aef6920d7ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:00:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fd34a347536538da804d4f3d1109839f72d4f80298ad8729c47a279d337b0347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:00:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://766ca116d6ac3b5109c2570b2f99d2796613fc2f98455165da04bfbc978569b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:00:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://933b951772476b44d81cc1de5e7dd03c3072133c75320a7bf83b29860a415903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:00:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0e1727a06a67899b8a0b64428ed30d3aeba0e5847e8d0d68587d3861df6686a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e1727a06a67899b8a0b64428ed30d3aeba0e5847e8d0d68587d3861df6686a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T16:00:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T16:00:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:00:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.416708 5120 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.425653 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-j58br" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38cdad44-c229-4500-b4e7-92c3cafb0974\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hos
tIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:02:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-j58br\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.425708 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-os-release\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.425981 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/38cdad44-c229-4500-b4e7-92c3cafb0974-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-j58br\" (UID: \"38cdad44-c229-4500-b4e7-92c3cafb0974\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-j58br" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.426109 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.426246 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/93b9b6b4-9863-4f54-bc53-efeef34239df-serviceca\") pod \"node-ca-ddlz4\" (UID: \"93b9b6b4-9863-4f54-bc53-efeef34239df\") " pod="openshift-image-registry/node-ca-ddlz4" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.426317 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf49x\" (UniqueName: \"kubernetes.io/projected/8b8c6fde-f7c0-40b3-a70d-dcf2c8d865af-kube-api-access-zf49x\") pod \"node-resolver-djrpd\" (UID: \"8b8c6fde-f7c0-40b3-a70d-dcf2c8d865af\") " pod="openshift-dns/node-resolver-djrpd" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.426392 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.426482 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-cnibin\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.426553 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.426620 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/38cdad44-c229-4500-b4e7-92c3cafb0974-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-j58br\" (UID: \"38cdad44-c229-4500-b4e7-92c3cafb0974\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-j58br" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.426729 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.426944 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-host-var-lib-cni-multus\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.427043 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7143452f-c193-4dbf-872c-a3ae9245f158-multus-daemon-config\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.427134 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-etc-kubernetes\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.426557 5120 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.427369 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.427420 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-11 16:02:28.927400138 +0000 UTC m=+98.181703469 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.427615 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.427653 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-system-cni-dir\") pod \"multus-qzwn6\" (UID: 
\"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.427679 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-host-run-netns\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.427740 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-multus-socket-dir-parent\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.427768 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.427797 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.427818 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-multus-cni-dir\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.427832 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7143452f-c193-4dbf-872c-a3ae9245f158-cni-binary-copy\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.427849 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-host-var-lib-cni-bin\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.427867 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-multus-conf-dir\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.427882 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.428212 5120 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.428284 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-11 16:02:28.9282724 +0000 UTC m=+98.182575731 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.428276 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-host-var-lib-kubelet\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.428324 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.428347 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/93b9b6b4-9863-4f54-bc53-efeef34239df-host\") pod \"node-ca-ddlz4\" (UID: \"93b9b6b4-9863-4f54-bc53-efeef34239df\") " pod="openshift-image-registry/node-ca-ddlz4" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 
16:02:28.428369 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-258v7\" (UniqueName: \"kubernetes.io/projected/93b9b6b4-9863-4f54-bc53-efeef34239df-kube-api-access-258v7\") pod \"node-ca-ddlz4\" (UID: \"93b9b6b4-9863-4f54-bc53-efeef34239df\") " pod="openshift-image-registry/node-ca-ddlz4" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.428392 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmp8v\" (UniqueName: \"kubernetes.io/projected/38cdad44-c229-4500-b4e7-92c3cafb0974-kube-api-access-kmp8v\") pod \"ovnkube-control-plane-57b78d8988-j58br\" (UID: \"38cdad44-c229-4500-b4e7-92c3cafb0974\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-j58br" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.428413 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8b8c6fde-f7c0-40b3-a70d-dcf2c8d865af-hosts-file\") pod \"node-resolver-djrpd\" (UID: \"8b8c6fde-f7c0-40b3-a70d-dcf2c8d865af\") " pod="openshift-dns/node-resolver-djrpd" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.428436 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-hostroot\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.428458 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1d42362-2047-47d8-b096-bd9f85606eeb-metrics-certs\") pod \"network-metrics-daemon-ccl9q\" (UID: \"f1d42362-2047-47d8-b096-bd9f85606eeb\") " pod="openshift-multus/network-metrics-daemon-ccl9q" Dec 11 16:02:28 
crc kubenswrapper[5120]: I1211 16:02:28.428478 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g26zk\" (UniqueName: \"kubernetes.io/projected/f1d42362-2047-47d8-b096-bd9f85606eeb-kube-api-access-g26zk\") pod \"network-metrics-daemon-ccl9q\" (UID: \"f1d42362-2047-47d8-b096-bd9f85606eeb\") " pod="openshift-multus/network-metrics-daemon-ccl9q" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.428503 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.428528 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.428613 5120 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.428663 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.428713 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/38cdad44-c229-4500-b4e7-92c3cafb0974-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-j58br\" (UID: \"38cdad44-c229-4500-b4e7-92c3cafb0974\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-j58br" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.428761 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.428795 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-host-run-k8s-cni-cncf-io\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.428819 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-6x2q9\" (UniqueName: \"kubernetes.io/projected/7143452f-c193-4dbf-872c-a3ae9245f158-kube-api-access-6x2q9\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.428846 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-host-run-multus-certs\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.428869 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8b8c6fde-f7c0-40b3-a70d-dcf2c8d865af-tmp-dir\") pod \"node-resolver-djrpd\" (UID: \"8b8c6fde-f7c0-40b3-a70d-dcf2c8d865af\") " pod="openshift-dns/node-resolver-djrpd" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.429001 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.429261 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.431236 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: 
\"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.439431 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-djrpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b8c6fde-f7c0-40b3-a70d-dcf2c8d865af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf49x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:02:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-djrpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.439619 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.439635 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.439646 5120 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b 
for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.439715 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-11 16:02:28.939698188 +0000 UTC m=+98.194001519 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.439759 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.439784 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.440541 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.440568 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.440619 5120 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.441220 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-11 16:02:28.941198566 +0000 UTC m=+98.195501897 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.441345 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.441376 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.441387 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.441403 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.441414 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:28Z","lastTransitionTime":"2025-12-11T16:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.443589 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.447406 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.448546 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e868a29f-b837-4513-ad30-f5b6c4354a09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ch7zn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ch7zn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-12-11T16:02:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fpg9g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.452239 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.458985 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xmwrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"599e97f6-aab8-4d0f-8d66-720ca1f0756b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc88j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc88j\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc88j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc88j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc88j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc88j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc88j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:02:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xmwrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.461902 5120 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.468070 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.475914 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.482627 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ccl9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1d42362-2047-47d8-b096-bd9f85606eeb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g26zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g26zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:02:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ccl9q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.489436 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ddlz4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b9b6b4-9863-4f54-bc53-efeef34239df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-258v7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:02:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ddlz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.499265 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7eccf842-c196-44ed-bae9-137577128c33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:00:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:00:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:00:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://137385641c3a889a062861c4d4c5e74639a19c7146eb48b5aa38b856f33f74b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:00:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a7d1c45dc8b53e74445e58caf0d0fbb1af46161d873688e7f02b38fd0428ed6a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:00:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://82e5abe2e48b9bc24be2a124ddb74d73753e6d23955cfc138efb98d4bd6f1d79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:00:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7a41fbe2b0881e86b16c4ddd845a97a6f0fe9b72c6b542e1e379a369c26766ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a41fbe2b0881e86b16c4ddd845a97a6f0fe9b72c6b542e1e379a369c26766ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T16:01:59Z\\\",\\\"message\\\":\\\"or=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1765468918\\\\\\\\\\\\\\\" (2025-12-11 16:01:58 +0000 UTC to 2025-12-11 16:01:59 +0000 UTC (now=2025-12-11 16:01:59.685916742 +0000 UTC))\\\\\\\"\\\\nI1211 16:01:59.686202 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController\\\\nI1211 16:01:59.686295 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"RequestHeaderAuthRequestController\\\\\\\"\\\\nI1211 16:01:59.686361 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 
certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1765468919\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1765468919\\\\\\\\\\\\\\\" (2025-12-11 15:01:59 +0000 UTC to 2028-12-11 15:01:59 +0000 UTC (now=2025-12-11 16:01:59.686333663 +0000 UTC))\\\\\\\"\\\\nI1211 16:01:59.686398 1 secure_serving.go:211] Serving securely on [::]:17697\\\\nI1211 16:01:59.686454 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1211 16:01:59.686475 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-980361572/tls.crt::/tmp/serving-cert-980361572/tls.key\\\\\\\"\\\\nI1211 16:01:59.686516 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1211 16:01:59.686560 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1211 16:01:59.686474 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 16:01:59.686598 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1211 16:01:59.687038 1 genericapiserver.go:696] [graceful-termination] waiting for shutdown to be initiated\\\\nF1211 16:01:59.688778 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T16:01:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://667d0fa793fa7a1f4bd25f2d9712e1904e20430fe61ba718ced9f03551336e97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:00:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://39adff1a81aa61a71c26a3b775b5dd302d606e87769a7a0cb2228b80c99b5b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39adff1a81aa61a71c26a3b775b5dd302d606e87769a7a0cb2228b80c99b5b3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T16:00:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T16:00:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:00:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.507759 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee5d4abf-253f-4996-b4c8-d93c2e52a444\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:00:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:01:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:00:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d4c1a7b286db7da7451dd4868274f3e5a6591db27811aa67406f2d0b83001d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:00:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://6638d34ff1843e072e0e07aee8955f1642fc6ed722b30a744affaca24191a467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:00:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b96a0ce7cbc1b5b30172cc8d635a0f6a38edd9ab4341a5ced498d731a842557c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:00:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5b00a2ce5994f26ef1f9442d7f86eddb570db24aaf6bd4cf8d6d7d6017ca6cfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:00:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:00:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.515102 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.521885 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.527495 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a37eca82-3943-4469-b701-38690186b27c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:00:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7744b267a54d739ebcbcd536b7bb137bc74d46f834b8eb3fc5606c29d78b2715\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:00:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://474ce51a7861c5bc5feff90183d2f0f8119cb78f6664cf308f3bddac9d48a54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://474ce51a7861c5bc5feff90183d2f0f8119cb78f6664cf308f3bddac9d48a54d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T16:00:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T16:00:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:00:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.529169 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.529202 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.529218 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.529238 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.529255 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.529270 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: 
\"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.529287 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.529301 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.529317 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.529332 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.529346 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.529362 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.529379 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.529397 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.529413 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.529429 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.529446 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.529463 5120 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.529479 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.529494 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.529512 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.529527 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.529542 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod 
\"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.529992 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.530172 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.530222 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.530246 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.530268 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: 
\"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.530290 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.530308 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.530327 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.530351 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.530372 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.530374 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" 
(OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.531214 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.531527 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.531568 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.531549 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.531857 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.531966 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.532035 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.532042 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.532175 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.532186 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.532182 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.532219 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.532260 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.532294 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.532326 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.532352 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.532464 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.532514 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.532542 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.532572 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.532600 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.532631 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 11 16:02:28 crc 
kubenswrapper[5120]: I1211 16:02:28.532658 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.532685 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.532711 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.532786 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.532831 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.532866 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: 
\"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.532526 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.532617 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.532875 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.532889 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.533271 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.533302 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.533553 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.533786 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.533799 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.533789 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.533982 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.534230 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.534301 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.534352 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.534355 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.534387 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.534411 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.534434 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.534457 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.534478 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.534495 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.534515 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.534542 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.534573 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.534605 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.534640 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 11 16:02:28 crc 
kubenswrapper[5120]: I1211 16:02:28.534672 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.534702 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.534732 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.534761 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.534788 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.534819 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.534850 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.534878 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.534906 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.534942 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.534975 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535004 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535036 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535067 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535181 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535212 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535256 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535291 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535319 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535346 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535380 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535410 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535441 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535469 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535503 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535531 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535563 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535596 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535622 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535655 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535688 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535716 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535743 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535772 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535801 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535829 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535859 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535885 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535915 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535942 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535972 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.536003 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.536033 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.536068 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.536098 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.536130 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.536180 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.536217 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.536256 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.536284 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.536315 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.536343 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.536375 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") "
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.534877 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.534918 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535023 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535036 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535057 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535353 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535475 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535556 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.535933 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.536042 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.536128 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.536262 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.536340 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.536583 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.537114 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.537121 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.537224 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.538197 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.538799 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.538329 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.538468 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.538518 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.538562 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.538654 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.538596 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.538834 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.538949 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.538978 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.538845 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.539044 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.539057 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.539328 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.539635 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.539818 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.539913 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.539933 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.541398 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.541424 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.541509 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.541834 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.541943 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.542088 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.542309 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.542325 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.542352 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.542478 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.542549 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.542585 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.542817 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.542960 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.543020 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.543065 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.543143 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.543244 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.543429 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.543772 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.536412 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.545638 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.545796 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.545803 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.545833 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.545843 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.545886 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.545889 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.546146 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.546274 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.546426 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.543995 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.546518 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.546522 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.546520 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.546616 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.546960 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.547035 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.547297 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.547371 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.547375 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.547602 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.547808 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.547834 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 11 16:02:28 crc 
kubenswrapper[5120]: I1211 16:02:28.547851 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.547871 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.547943 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.547893 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548024 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548050 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548072 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548111 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548136 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548175 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548199 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548220 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548241 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548264 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: 
\"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548286 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548284 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548310 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548310 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548343 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548371 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548391 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548406 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548422 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548439 5120 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548463 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548478 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548494 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548512 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548528 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: 
\"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548548 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548566 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548583 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548598 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548613 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548629 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" 
(UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548648 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548665 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548682 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548699 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548716 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: 
\"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548732 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548748 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548764 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548782 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548820 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548837 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548854 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548874 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548889 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548907 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548927 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") 
" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548945 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548963 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548980 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.549002 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.549025 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.549047 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.549064 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.549081 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.549100 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.549122 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.549139 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: 
\"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.549182 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.549200 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.549221 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.549239 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.549256 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.549275 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.549292 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.549309 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.549327 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.549344 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.549362 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " 
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.549379 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.549395 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.549414 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.549434 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.549451 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.549469 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.549779 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.549811 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.549821 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.549706 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a4d5153-bf50-4a78-806d-ac9c2e7d4ff6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:00:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:00:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:01:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:00:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://f0702dcea334f0b923b7278d7a558f48faf3eddeb3ff31dd35a839d4efe35d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:00:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://58e6c0d1cb74646226154f4f31743874a8dea02e5529fecbdf8c6bbf0cf109d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:00:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\
\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a88d2b9d7e1fdfd36ac0c9d78c11e05bc4c426d388675ade7675da768123d7b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:00:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8646f512b9353647d4931acafb053b650dae9e1b7f9414fda76dcbdfa278b53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:00:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d641e9eddc098744fc7c612b4ede6f0e8eef1b5aba7b4a20d5b9b53791dc007c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T16:00:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7ea7d8d5242e079f06a3da14aa39458183802880ada73a3
fe2cf37ef44cf670a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ea7d8d5242e079f06a3da14aa39458183802880ada73a3fe2cf37ef44cf670a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T16:00:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T16:00:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a762cd77cd8e263859222842615426dc67c1d88b009dfdb8e95b1dd46052d25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a762cd77cd8e263859222842615426dc67c1d88b009dfdb8e95b1dd46052d25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T16:00:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-12-11T16:00:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://101ba2a0e7f8e531e25d62b429ef52fc025f1d4dd85a8a99292dc727617bdc7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://101ba2a0e7f8e531e25d62b429ef52fc025f1d4dd85a8a99292dc727617bdc7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T16:00:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T16:00:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:00:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:28 crc 
kubenswrapper[5120]: I1211 16:02:28.549837 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.549848 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:28Z","lastTransitionTime":"2025-12-11T16:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.551097 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.551122 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.551139 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.551183 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod 
\"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.551204 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.551222 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.551241 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.551265 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.551287 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.551308 5120 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.551347 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.551397 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.551557 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.551583 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.551608 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod 
\"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.551760 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.552370 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.552403 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.552424 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.552446 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.552521 5120 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-host-run-k8s-cni-cncf-io\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.552585 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6x2q9\" (UniqueName: \"kubernetes.io/projected/7143452f-c193-4dbf-872c-a3ae9245f158-kube-api-access-6x2q9\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.552610 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.552640 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-host-run-multus-certs\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.552661 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8b8c6fde-f7c0-40b3-a70d-dcf2c8d865af-tmp-dir\") pod \"node-resolver-djrpd\" (UID: \"8b8c6fde-f7c0-40b3-a70d-dcf2c8d865af\") " pod="openshift-dns/node-resolver-djrpd" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.552685 5120 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-os-release\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.552709 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/38cdad44-c229-4500-b4e7-92c3cafb0974-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-j58br\" (UID: \"38cdad44-c229-4500-b4e7-92c3cafb0974\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-j58br" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.552744 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/93b9b6b4-9863-4f54-bc53-efeef34239df-serviceca\") pod \"node-ca-ddlz4\" (UID: \"93b9b6b4-9863-4f54-bc53-efeef34239df\") " pod="openshift-image-registry/node-ca-ddlz4" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.552764 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zf49x\" (UniqueName: \"kubernetes.io/projected/8b8c6fde-f7c0-40b3-a70d-dcf2c8d865af-kube-api-access-zf49x\") pod \"node-resolver-djrpd\" (UID: \"8b8c6fde-f7c0-40b3-a70d-dcf2c8d865af\") " pod="openshift-dns/node-resolver-djrpd" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.552783 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/599e97f6-aab8-4d0f-8d66-720ca1f0756b-cni-binary-copy\") pod \"multus-additional-cni-plugins-xmwrh\" (UID: \"599e97f6-aab8-4d0f-8d66-720ca1f0756b\") " pod="openshift-multus/multus-additional-cni-plugins-xmwrh" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.552803 5120 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/599e97f6-aab8-4d0f-8d66-720ca1f0756b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xmwrh\" (UID: \"599e97f6-aab8-4d0f-8d66-720ca1f0756b\") " pod="openshift-multus/multus-additional-cni-plugins-xmwrh" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.552836 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-cnibin\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.552858 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.552875 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/38cdad44-c229-4500-b4e7-92c3cafb0974-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-j58br\" (UID: \"38cdad44-c229-4500-b4e7-92c3cafb0974\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-j58br" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.552891 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-node-log\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.552908 5120 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/599e97f6-aab8-4d0f-8d66-720ca1f0756b-os-release\") pod \"multus-additional-cni-plugins-xmwrh\" (UID: \"599e97f6-aab8-4d0f-8d66-720ca1f0756b\") " pod="openshift-multus/multus-additional-cni-plugins-xmwrh" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.552933 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-host-var-lib-cni-multus\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.552953 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7143452f-c193-4dbf-872c-a3ae9245f158-multus-daemon-config\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.552974 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-etc-kubernetes\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.552997 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.553018 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-system-cni-dir\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.553045 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-run-ovn-kubernetes\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.553065 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d25df28f-e707-49ec-a539-9f1d1b40a297-ovn-node-metrics-cert\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.553083 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/599e97f6-aab8-4d0f-8d66-720ca1f0756b-system-cni-dir\") pod \"multus-additional-cni-plugins-xmwrh\" (UID: \"599e97f6-aab8-4d0f-8d66-720ca1f0756b\") " pod="openshift-multus/multus-additional-cni-plugins-xmwrh" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.553102 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/599e97f6-aab8-4d0f-8d66-720ca1f0756b-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-xmwrh\" (UID: \"599e97f6-aab8-4d0f-8d66-720ca1f0756b\") " pod="openshift-multus/multus-additional-cni-plugins-xmwrh" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 
16:02:28.553123 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-host-run-netns\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.553740 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e868a29f-b837-4513-ad30-f5b6c4354a09-proxy-tls\") pod \"machine-config-daemon-fpg9g\" (UID: \"e868a29f-b837-4513-ad30-f5b6c4354a09\") " pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.553864 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e868a29f-b837-4513-ad30-f5b6c4354a09-mcd-auth-proxy-config\") pod \"machine-config-daemon-fpg9g\" (UID: \"e868a29f-b837-4513-ad30-f5b6c4354a09\") " pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.553977 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-var-lib-openvswitch\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.554091 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-run-openvswitch\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" Dec 
11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.554204 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-run-ovn\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.554336 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-multus-socket-dir-parent\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.554434 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-log-socket\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.554548 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-kubelet\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.554634 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-542ds\" (UniqueName: \"kubernetes.io/projected/d25df28f-e707-49ec-a539-9f1d1b40a297-kube-api-access-542ds\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" Dec 11 16:02:28 crc 
kubenswrapper[5120]: I1211 16:02:28.554742 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-slash\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.554830 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-etc-openvswitch\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.554912 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-cni-bin\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.555084 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-multus-cni-dir\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.555228 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7143452f-c193-4dbf-872c-a3ae9245f158-cni-binary-copy\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.555321 5120 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-host-var-lib-cni-bin\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.555413 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-multus-conf-dir\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.548542 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.549959 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.550015 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.550274 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.555530 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.550331 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.550339 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.550349 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.550393 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.550603 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.550623 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.550625 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.550839 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.551240 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.551310 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.551359 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.551635 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.551741 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.551902 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.551977 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.552268 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.553222 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.553287 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.553396 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.553654 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.553736 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.553864 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.553899 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.554117 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.554193 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.554203 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.555797 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.554257 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.554641 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.555059 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.555261 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.555276 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.555465 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.555636 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.555734 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.556133 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.556142 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.556213 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.556323 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-host-run-multus-certs\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.556344 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.556378 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.556400 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.556575 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.556683 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.556731 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8b8c6fde-f7c0-40b3-a70d-dcf2c8d865af-tmp-dir\") pod \"node-resolver-djrpd\" (UID: \"8b8c6fde-f7c0-40b3-a70d-dcf2c8d865af\") " pod="openshift-dns/node-resolver-djrpd" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.556808 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-os-release\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.557220 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" 
(UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.557266 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.557352 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.557488 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-host-var-lib-kubelet\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.557567 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/93b9b6b4-9863-4f54-bc53-efeef34239df-host\") pod \"node-ca-ddlz4\" (UID: \"93b9b6b4-9863-4f54-bc53-efeef34239df\") " pod="openshift-image-registry/node-ca-ddlz4" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.557593 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.557626 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.557847 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-host-var-lib-cni-multus\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.557900 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.557912 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.558199 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-258v7\" (UniqueName: \"kubernetes.io/projected/93b9b6b4-9863-4f54-bc53-efeef34239df-kube-api-access-258v7\") pod \"node-ca-ddlz4\" (UID: \"93b9b6b4-9863-4f54-bc53-efeef34239df\") " pod="openshift-image-registry/node-ca-ddlz4" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.558248 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kmp8v\" (UniqueName: \"kubernetes.io/projected/38cdad44-c229-4500-b4e7-92c3cafb0974-kube-api-access-kmp8v\") pod \"ovnkube-control-plane-57b78d8988-j58br\" (UID: \"38cdad44-c229-4500-b4e7-92c3cafb0974\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-j58br" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.558281 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8b8c6fde-f7c0-40b3-a70d-dcf2c8d865af-hosts-file\") pod \"node-resolver-djrpd\" (UID: \"8b8c6fde-f7c0-40b3-a70d-dcf2c8d865af\") " pod="openshift-dns/node-resolver-djrpd" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.558309 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-systemd-units\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.558311 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: 
"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.558365 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-hostroot\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.558393 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.558402 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1d42362-2047-47d8-b096-bd9f85606eeb-metrics-certs\") pod \"network-metrics-daemon-ccl9q\" (UID: \"f1d42362-2047-47d8-b096-bd9f85606eeb\") " pod="openshift-multus/network-metrics-daemon-ccl9q" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.558432 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g26zk\" (UniqueName: \"kubernetes.io/projected/f1d42362-2047-47d8-b096-bd9f85606eeb-kube-api-access-g26zk\") pod \"network-metrics-daemon-ccl9q\" (UID: \"f1d42362-2047-47d8-b096-bd9f85606eeb\") " pod="openshift-multus/network-metrics-daemon-ccl9q" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.558460 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/e868a29f-b837-4513-ad30-f5b6c4354a09-rootfs\") pod \"machine-config-daemon-fpg9g\" (UID: \"e868a29f-b837-4513-ad30-f5b6c4354a09\") " pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.558482 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch7zn\" (UniqueName: \"kubernetes.io/projected/e868a29f-b837-4513-ad30-f5b6c4354a09-kube-api-access-ch7zn\") pod \"machine-config-daemon-fpg9g\" (UID: \"e868a29f-b837-4513-ad30-f5b6c4354a09\") " pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.558504 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d25df28f-e707-49ec-a539-9f1d1b40a297-ovnkube-config\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.558532 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d25df28f-e707-49ec-a539-9f1d1b40a297-env-overrides\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.558549 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/599e97f6-aab8-4d0f-8d66-720ca1f0756b-cnibin\") pod \"multus-additional-cni-plugins-xmwrh\" (UID: \"599e97f6-aab8-4d0f-8d66-720ca1f0756b\") " pod="openshift-multus/multus-additional-cni-plugins-xmwrh" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.558565 5120 operation_generator.go:781] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.558735 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/93b9b6b4-9863-4f54-bc53-efeef34239df-serviceca\") pod \"node-ca-ddlz4\" (UID: \"93b9b6b4-9863-4f54-bc53-efeef34239df\") " pod="openshift-image-registry/node-ca-ddlz4" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.558820 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-cnibin\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.558871 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.558880 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.558926 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.559360 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.559383 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.559573 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.559595 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.559729 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.559761 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.559802 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.559974 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.560041 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.559579 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-system-cni-dir\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.560230 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8b8c6fde-f7c0-40b3-a70d-dcf2c8d865af-hosts-file\") pod \"node-resolver-djrpd\" (UID: \"8b8c6fde-f7c0-40b3-a70d-dcf2c8d865af\") " pod="openshift-dns/node-resolver-djrpd" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.560615 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7143452f-c193-4dbf-872c-a3ae9245f158-cni-binary-copy\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " 
pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.560648 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-host-var-lib-cni-bin\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.560674 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-multus-conf-dir\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.560851 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-host-var-lib-kubelet\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.560876 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/93b9b6b4-9863-4f54-bc53-efeef34239df-host\") pod \"node-ca-ddlz4\" (UID: \"93b9b6b4-9863-4f54-bc53-efeef34239df\") " pod="openshift-image-registry/node-ca-ddlz4" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.561097 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7143452f-c193-4dbf-872c-a3ae9245f158-multus-daemon-config\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.560498 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.560555 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.560560 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.560598 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.560824 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:29.060803423 +0000 UTC m=+98.315106754 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.561138 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.559732 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-multus-socket-dir-parent\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.559407 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/38cdad44-c229-4500-b4e7-92c3cafb0974-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-j58br\" (UID: \"38cdad44-c229-4500-b4e7-92c3cafb0974\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-j58br" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.561630 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/599e97f6-aab8-4d0f-8d66-720ca1f0756b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xmwrh\" (UID: \"599e97f6-aab8-4d0f-8d66-720ca1f0756b\") " pod="openshift-multus/multus-additional-cni-plugins-xmwrh" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.559876 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-multus-cni-dir\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.558207 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/38cdad44-c229-4500-b4e7-92c3cafb0974-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-j58br\" (UID: \"38cdad44-c229-4500-b4e7-92c3cafb0974\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-j58br" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.559768 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-host-run-k8s-cni-cncf-io\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.559499 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.559422 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-etc-kubernetes\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.559643 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-host-run-netns\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.560793 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.560865 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.560887 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.561761 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc88j\" (UniqueName: \"kubernetes.io/projected/599e97f6-aab8-4d0f-8d66-720ca1f0756b-kube-api-access-gc88j\") pod \"multus-additional-cni-plugins-xmwrh\" (UID: \"599e97f6-aab8-4d0f-8d66-720ca1f0756b\") " pod="openshift-multus/multus-additional-cni-plugins-xmwrh" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.561805 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/38cdad44-c229-4500-b4e7-92c3cafb0974-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-j58br\" (UID: \"38cdad44-c229-4500-b4e7-92c3cafb0974\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-j58br" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.561831 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-run-netns\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.561851 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-run-systemd\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.561867 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-cni-netd\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.561881 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d25df28f-e707-49ec-a539-9f1d1b40a297-ovnkube-script-lib\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.568857 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.568961 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.568993 5120 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.569338 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1d42362-2047-47d8-b096-bd9f85606eeb-metrics-certs podName:f1d42362-2047-47d8-b096-bd9f85606eeb nodeName:}" failed. No retries permitted until 2025-12-11 16:02:29.069322068 +0000 UTC m=+98.323625399 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f1d42362-2047-47d8-b096-bd9f85606eeb-metrics-certs") pod "network-metrics-daemon-ccl9q" (UID: "f1d42362-2047-47d8-b096-bd9f85606eeb") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.569457 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.569548 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7143452f-c193-4dbf-872c-a3ae9245f158-hostroot\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.569707 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.569749 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.569766 5120 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.569781 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.569796 5120 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.569810 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: 
\"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.569822 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.569835 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.569848 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.569848 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.569862 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.569879 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.569893 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.569593 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570188 5120 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570210 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570222 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570235 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570248 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570259 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570271 5120 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 
16:02:28.570283 5120 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570294 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570306 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570318 5120 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570329 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570341 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570352 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570365 5120 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570378 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570390 5120 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570402 5120 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570413 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570425 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570438 5120 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570451 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570465 5120 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570477 5120 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570489 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570501 5120 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570513 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570525 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570537 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc 
kubenswrapper[5120]: I1211 16:02:28.570548 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570559 5120 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570573 5120 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570591 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570603 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570614 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570626 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570637 5120 reconciler_common.go:299] "Volume detached for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570648 5120 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570659 5120 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570670 5120 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570684 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570697 5120 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570708 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570720 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 
16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570731 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570743 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570755 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570767 5120 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570778 5120 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570789 5120 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570801 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570813 5120 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570825 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570836 5120 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570847 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570858 5120 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570870 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570881 5120 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570895 5120 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570909 5120 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570921 5120 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570933 5120 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570945 5120 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570969 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570983 5120 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.570994 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.571006 5120 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.571018 5120 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.571036 5120 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.571049 5120 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.571060 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.571070 5120 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.571080 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 
16:02:28.571090 5120 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.571101 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.571409 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.571519 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.571580 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.571844 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.571877 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.571896 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.571935 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.571978 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572081 5120 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572104 5120 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572174 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572196 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572216 5120 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572230 5120 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572264 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572280 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572293 5120 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572306 5120 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572317 5120 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572329 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572341 5120 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572354 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572372 5120 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572384 5120 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572399 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572411 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572423 5120 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572433 5120 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572444 5120 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572456 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572467 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572479 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572494 5120 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572510 5120 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572521 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572532 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572544 5120 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572556 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572567 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572579 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572591 5120 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572603 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572614 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572625 5120 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572636 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572648 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572659 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572671 5120 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572682 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572694 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572705 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572717 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572727 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572739 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572752 5120 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572764 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572776 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572787 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572800 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572813 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572826 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572839 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572850 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572864 5120 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572881 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572896 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572907 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572918 5120 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572931 5120 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572943 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572954 5120 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572965 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572977 5120 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.572989 5120 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.573002 5120 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.573013 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.573025 5120 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.573037 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.573048 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.573060 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.573072 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.573084 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.573095 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.573106 5120 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.573118 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.573130 5120 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.573142 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.573170 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.573182 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.573195 5120 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.573207 5120 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.573218 5120 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.574846 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.574962 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.575077 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.575277 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.575859 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/38cdad44-c229-4500-b4e7-92c3cafb0974-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-j58br\" (UID: \"38cdad44-c229-4500-b4e7-92c3cafb0974\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-j58br"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.576022 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmp8v\" (UniqueName: \"kubernetes.io/projected/38cdad44-c229-4500-b4e7-92c3cafb0974-kube-api-access-kmp8v\") pod \"ovnkube-control-plane-57b78d8988-j58br\" (UID: \"38cdad44-c229-4500-b4e7-92c3cafb0974\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-j58br"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.576074 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-258v7\" (UniqueName: \"kubernetes.io/projected/93b9b6b4-9863-4f54-bc53-efeef34239df-kube-api-access-258v7\") pod \"node-ca-ddlz4\" (UID: \"93b9b6b4-9863-4f54-bc53-efeef34239df\") " pod="openshift-image-registry/node-ca-ddlz4"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.576294 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-qzwn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7143452f-c193-4dbf-872c-a3ae9245f158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6x2q9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:02:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzwn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.576382 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.576440 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf49x\" (UniqueName: \"kubernetes.io/projected/8b8c6fde-f7c0-40b3-a70d-dcf2c8d865af-kube-api-access-zf49x\") pod \"node-resolver-djrpd\" (UID: \"8b8c6fde-f7c0-40b3-a70d-dcf2c8d865af\") " pod="openshift-dns/node-resolver-djrpd"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.577569 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x2q9\" (UniqueName: \"kubernetes.io/projected/7143452f-c193-4dbf-872c-a3ae9245f158-kube-api-access-6x2q9\") pod \"multus-qzwn6\" (UID: \"7143452f-c193-4dbf-872c-a3ae9245f158\") " pod="openshift-multus/multus-qzwn6"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.577930 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.577938 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.577987 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.578054 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.578316 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.578471 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.578499 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.578559 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.578861 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.579091 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.579560 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.579650 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.579956 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.580443 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.580599 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.580648 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.580861 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.580894 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.580932 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.581333 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.581424 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.581578 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.581635 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.582194 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.585447 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.586909 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g26zk\" (UniqueName: \"kubernetes.io/projected/f1d42362-2047-47d8-b096-bd9f85606eeb-kube-api-access-g26zk\") pod \"network-metrics-daemon-ccl9q\" (UID: \"f1d42362-2047-47d8-b096-bd9f85606eeb\") " pod="openshift-multus/network-metrics-daemon-ccl9q" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.591833 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25df28f-e707-49ec-a539-9f1d1b40a297\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T16:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-542ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-542ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-542ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-542ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-542ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-542ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-542ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-542ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-542ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T16:02:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxt85\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.592644 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.595336 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.595809 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.606830 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.612382 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.636888 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.643527 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.649790 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.651205 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.651239 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.651248 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.651267 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.651277 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:28Z","lastTransitionTime":"2025-12-11T16:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:02:28 crc kubenswrapper[5120]: W1211 16:02:28.653035 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-4eaf28c9bf79307793939bba2ad33a4ee9444af12d379559637aa1cb636f38f8 WatchSource:0}: Error finding container 4eaf28c9bf79307793939bba2ad33a4ee9444af12d379559637aa1cb636f38f8: Status 404 returned error can't find the container with id 4eaf28c9bf79307793939bba2ad33a4ee9444af12d379559637aa1cb636f38f8 Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.668410 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-qzwn6" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.673877 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-node-log\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.673911 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/599e97f6-aab8-4d0f-8d66-720ca1f0756b-os-release\") pod \"multus-additional-cni-plugins-xmwrh\" (UID: \"599e97f6-aab8-4d0f-8d66-720ca1f0756b\") " pod="openshift-multus/multus-additional-cni-plugins-xmwrh" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.674027 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-run-ovn-kubernetes\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.674051 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d25df28f-e707-49ec-a539-9f1d1b40a297-ovn-node-metrics-cert\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.674071 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/599e97f6-aab8-4d0f-8d66-720ca1f0756b-system-cni-dir\") pod \"multus-additional-cni-plugins-xmwrh\" (UID: \"599e97f6-aab8-4d0f-8d66-720ca1f0756b\") " 
pod="openshift-multus/multus-additional-cni-plugins-xmwrh"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.674092 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/599e97f6-aab8-4d0f-8d66-720ca1f0756b-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-xmwrh\" (UID: \"599e97f6-aab8-4d0f-8d66-720ca1f0756b\") " pod="openshift-multus/multus-additional-cni-plugins-xmwrh"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.674112 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-node-log\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.674365 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/599e97f6-aab8-4d0f-8d66-720ca1f0756b-system-cni-dir\") pod \"multus-additional-cni-plugins-xmwrh\" (UID: \"599e97f6-aab8-4d0f-8d66-720ca1f0756b\") " pod="openshift-multus/multus-additional-cni-plugins-xmwrh"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.674460 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/599e97f6-aab8-4d0f-8d66-720ca1f0756b-os-release\") pod \"multus-additional-cni-plugins-xmwrh\" (UID: \"599e97f6-aab8-4d0f-8d66-720ca1f0756b\") " pod="openshift-multus/multus-additional-cni-plugins-xmwrh"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.674114 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e868a29f-b837-4513-ad30-f5b6c4354a09-proxy-tls\") pod \"machine-config-daemon-fpg9g\" (UID: \"e868a29f-b837-4513-ad30-f5b6c4354a09\") " pod="openshift-machine-config-operator/machine-config-daemon-fpg9g"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.674514 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e868a29f-b837-4513-ad30-f5b6c4354a09-mcd-auth-proxy-config\") pod \"machine-config-daemon-fpg9g\" (UID: \"e868a29f-b837-4513-ad30-f5b6c4354a09\") " pod="openshift-machine-config-operator/machine-config-daemon-fpg9g"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.674540 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-var-lib-openvswitch\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.674561 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-run-openvswitch\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.674587 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-run-ovn\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.674613 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-log-socket\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.674635 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-kubelet\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.674657 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-542ds\" (UniqueName: \"kubernetes.io/projected/d25df28f-e707-49ec-a539-9f1d1b40a297-kube-api-access-542ds\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.674706 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-slash\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.674730 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-etc-openvswitch\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.674752 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-cni-bin\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.674791 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-systemd-units\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.674827 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e868a29f-b837-4513-ad30-f5b6c4354a09-rootfs\") pod \"machine-config-daemon-fpg9g\" (UID: \"e868a29f-b837-4513-ad30-f5b6c4354a09\") " pod="openshift-machine-config-operator/machine-config-daemon-fpg9g"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.674849 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ch7zn\" (UniqueName: \"kubernetes.io/projected/e868a29f-b837-4513-ad30-f5b6c4354a09-kube-api-access-ch7zn\") pod \"machine-config-daemon-fpg9g\" (UID: \"e868a29f-b837-4513-ad30-f5b6c4354a09\") " pod="openshift-machine-config-operator/machine-config-daemon-fpg9g"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.674871 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d25df28f-e707-49ec-a539-9f1d1b40a297-ovnkube-config\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.674892 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d25df28f-e707-49ec-a539-9f1d1b40a297-env-overrides\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.674913 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/599e97f6-aab8-4d0f-8d66-720ca1f0756b-cnibin\") pod \"multus-additional-cni-plugins-xmwrh\" (UID: \"599e97f6-aab8-4d0f-8d66-720ca1f0756b\") " pod="openshift-multus/multus-additional-cni-plugins-xmwrh"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.674932 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/599e97f6-aab8-4d0f-8d66-720ca1f0756b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xmwrh\" (UID: \"599e97f6-aab8-4d0f-8d66-720ca1f0756b\") " pod="openshift-multus/multus-additional-cni-plugins-xmwrh"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.674952 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gc88j\" (UniqueName: \"kubernetes.io/projected/599e97f6-aab8-4d0f-8d66-720ca1f0756b-kube-api-access-gc88j\") pod \"multus-additional-cni-plugins-xmwrh\" (UID: \"599e97f6-aab8-4d0f-8d66-720ca1f0756b\") " pod="openshift-multus/multus-additional-cni-plugins-xmwrh"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.674979 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-run-netns\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.674999 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-run-systemd\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675025 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-cni-netd\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675046 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d25df28f-e707-49ec-a539-9f1d1b40a297-ovnkube-script-lib\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675071 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675116 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/599e97f6-aab8-4d0f-8d66-720ca1f0756b-cni-binary-copy\") pod \"multus-additional-cni-plugins-xmwrh\" (UID: \"599e97f6-aab8-4d0f-8d66-720ca1f0756b\") " pod="openshift-multus/multus-additional-cni-plugins-xmwrh"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675137 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/599e97f6-aab8-4d0f-8d66-720ca1f0756b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xmwrh\" (UID: \"599e97f6-aab8-4d0f-8d66-720ca1f0756b\") " pod="openshift-multus/multus-additional-cni-plugins-xmwrh"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675201 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e868a29f-b837-4513-ad30-f5b6c4354a09-mcd-auth-proxy-config\") pod \"machine-config-daemon-fpg9g\" (UID: \"e868a29f-b837-4513-ad30-f5b6c4354a09\") " pod="openshift-machine-config-operator/machine-config-daemon-fpg9g"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675208 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/599e97f6-aab8-4d0f-8d66-720ca1f0756b-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-xmwrh\" (UID: \"599e97f6-aab8-4d0f-8d66-720ca1f0756b\") " pod="openshift-multus/multus-additional-cni-plugins-xmwrh"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675230 5120 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675248 5120 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675262 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675280 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675291 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-run-netns\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675294 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675354 5120 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675369 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675382 5120 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675413 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675426 5120 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675438 5120 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675449 5120 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675462 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675496 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675513 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675526 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675539 5120 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675551 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-run-openvswitch\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675551 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675587 5120 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675599 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675612 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675625 5120 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675648 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-kubelet\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675660 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675675 5120 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675690 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675702 5120 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675738 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675754 5120 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675767 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675781 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675817 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675833 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675840 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-slash\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675845 5120 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675866 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675879 5120 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675892 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675903 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675916 5120 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675927 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675937 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675948 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675960 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675971 5120 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675983 5120 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675996 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.676008 5120 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.676021 5120 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.676033 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.676045 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.676056 5120 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.676067 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.676079 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675624 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-log-socket\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.676124 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-etc-openvswitch\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.676173 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-cni-bin\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.676208 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-systemd-units\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.676239 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e868a29f-b837-4513-ad30-f5b6c4354a09-rootfs\") pod \"machine-config-daemon-fpg9g\" (UID: \"e868a29f-b837-4513-ad30-f5b6c4354a09\") " pod="openshift-machine-config-operator/machine-config-daemon-fpg9g"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.676355 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/599e97f6-aab8-4d0f-8d66-720ca1f0756b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xmwrh\" (UID: \"599e97f6-aab8-4d0f-8d66-720ca1f0756b\") " pod="openshift-multus/multus-additional-cni-plugins-xmwrh"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.676382 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-run-ovn-kubernetes\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.676422 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.677079 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/599e97f6-aab8-4d0f-8d66-720ca1f0756b-cni-binary-copy\") pod \"multus-additional-cni-plugins-xmwrh\" (UID: \"599e97f6-aab8-4d0f-8d66-720ca1f0756b\") " pod="openshift-multus/multus-additional-cni-plugins-xmwrh"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.677179 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-run-systemd\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.677263 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-cni-netd\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675522 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-var-lib-openvswitch\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.677307 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/599e97f6-aab8-4d0f-8d66-720ca1f0756b-cnibin\") pod \"multus-additional-cni-plugins-xmwrh\" (UID: \"599e97f6-aab8-4d0f-8d66-720ca1f0756b\") " pod="openshift-multus/multus-additional-cni-plugins-xmwrh"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.675594 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-run-ovn\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.677451 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d25df28f-e707-49ec-a539-9f1d1b40a297-ovnkube-config\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.678039 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d25df28f-e707-49ec-a539-9f1d1b40a297-env-overrides\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.678331 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/599e97f6-aab8-4d0f-8d66-720ca1f0756b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xmwrh\" (UID: \"599e97f6-aab8-4d0f-8d66-720ca1f0756b\") " pod="openshift-multus/multus-additional-cni-plugins-xmwrh"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.678585 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d25df28f-e707-49ec-a539-9f1d1b40a297-ovnkube-script-lib\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.679235 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d25df28f-e707-49ec-a539-9f1d1b40a297-ovn-node-metrics-cert\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.679759 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-ddlz4"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.680267 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e868a29f-b837-4513-ad30-f5b6c4354a09-proxy-tls\") pod \"machine-config-daemon-fpg9g\" (UID: \"e868a29f-b837-4513-ad30-f5b6c4354a09\") " pod="openshift-machine-config-operator/machine-config-daemon-fpg9g"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.690425 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch7zn\" (UniqueName: \"kubernetes.io/projected/e868a29f-b837-4513-ad30-f5b6c4354a09-kube-api-access-ch7zn\") pod \"machine-config-daemon-fpg9g\" (UID: \"e868a29f-b837-4513-ad30-f5b6c4354a09\") " pod="openshift-machine-config-operator/machine-config-daemon-fpg9g"
Dec 11 16:02:28 crc kubenswrapper[5120]: W1211 16:02:28.691207 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7143452f_c193_4dbf_872c_a3ae9245f158.slice/crio-730a2ba9cbd7f6f5fdabb424776986fadc7258b9d6435de57472ff008f3615e8 WatchSource:0}: Error finding container 730a2ba9cbd7f6f5fdabb424776986fadc7258b9d6435de57472ff008f3615e8: Status 404 returned error can't find the container with id 730a2ba9cbd7f6f5fdabb424776986fadc7258b9d6435de57472ff008f3615e8
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.691368 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc88j\" (UniqueName: \"kubernetes.io/projected/599e97f6-aab8-4d0f-8d66-720ca1f0756b-kube-api-access-gc88j\") pod \"multus-additional-cni-plugins-xmwrh\" (UID: \"599e97f6-aab8-4d0f-8d66-720ca1f0756b\") " pod="openshift-multus/multus-additional-cni-plugins-xmwrh"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.692303 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-542ds\" (UniqueName: \"kubernetes.io/projected/d25df28f-e707-49ec-a539-9f1d1b40a297-kube-api-access-542ds\") pod \"ovnkube-node-bxt85\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.694130 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-j58br"
Dec 11 16:02:28 crc kubenswrapper[5120]: W1211 16:02:28.701765 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93b9b6b4_9863_4f54_bc53_efeef34239df.slice/crio-afc9b2e7cd35ff8f0d83dfaa5e136df20776e7dd228ae78cfed4cf2e3b1e9ed7 WatchSource:0}: Error finding container afc9b2e7cd35ff8f0d83dfaa5e136df20776e7dd228ae78cfed4cf2e3b1e9ed7: Status 404 returned error can't find the container with id afc9b2e7cd35ff8f0d83dfaa5e136df20776e7dd228ae78cfed4cf2e3b1e9ed7
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.705060 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-djrpd"
Dec 11 16:02:28 crc kubenswrapper[5120]: W1211 16:02:28.706584 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38cdad44_c229_4500_b4e7_92c3cafb0974.slice/crio-fc45727ed5c2d8d65b34cfac06d3e49a46bd823b9502d3f8b0d727a70ee20359 WatchSource:0}: Error finding container fc45727ed5c2d8d65b34cfac06d3e49a46bd823b9502d3f8b0d727a70ee20359: Status 404 returned error can't find the container with id fc45727ed5c2d8d65b34cfac06d3e49a46bd823b9502d3f8b0d727a70ee20359
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.716895 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.728315 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.739914 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xmwrh"
Dec 11 16:02:28 crc kubenswrapper[5120]: W1211 16:02:28.744101 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode868a29f_b837_4513_ad30_f5b6c4354a09.slice/crio-8d5c335847691900df364032761dc34f0f8491c687104b395972af36342bdf9e WatchSource:0}: Error finding container 8d5c335847691900df364032761dc34f0f8491c687104b395972af36342bdf9e: Status 404 returned error can't find the container with id 8d5c335847691900df364032761dc34f0f8491c687104b395972af36342bdf9e
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.752636 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.752667 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.752677 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.752691 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.752700 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:28Z","lastTransitionTime":"2025-12-11T16:02:28Z","reason":"KubeletNotReady","message":"container
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:02:28 crc kubenswrapper[5120]: W1211 16:02:28.768495 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd25df28f_e707_49ec_a539_9f1d1b40a297.slice/crio-69f1b544aeee78b0ad6657cc245609d0a90d9bac27f26c85945bc150eac13fee WatchSource:0}: Error finding container 69f1b544aeee78b0ad6657cc245609d0a90d9bac27f26c85945bc150eac13fee: Status 404 returned error can't find the container with id 69f1b544aeee78b0ad6657cc245609d0a90d9bac27f26c85945bc150eac13fee Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.861410 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.861869 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.861884 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.862036 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.862053 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:28Z","lastTransitionTime":"2025-12-11T16:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.970829 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.970878 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.970889 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.970907 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.970920 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:28Z","lastTransitionTime":"2025-12-11T16:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.978509 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.978552 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.978574 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.978649 5120 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 16:02:28 crc kubenswrapper[5120]: I1211 16:02:28.978649 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.978697 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-11 16:02:29.978684514 +0000 UTC m=+99.232987845 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.978700 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.978724 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.978735 5120 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.978779 5120 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.978791 5120 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-11 16:02:29.978774996 +0000 UTC m=+99.233078327 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.978831 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.978842 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.978850 5120 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.978901 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-11 16:02:29.978891589 +0000 UTC m=+99.233194920 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 16:02:28 crc kubenswrapper[5120]: E1211 16:02:28.978920 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-11 16:02:29.97891392 +0000 UTC m=+99.233217251 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.025964 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.026817 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.028121 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.029060 5120 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.030825 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.032565 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.033873 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.036436 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.036996 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.038551 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.040645 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.046597 5120 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.048051 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.052702 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.053170 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.056316 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.057089 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.059433 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.062067 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.065050 5120 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.065955 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.067761 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.068450 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.070315 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.071132 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.072528 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.073322 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.073788 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.073821 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.073830 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.073845 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.073855 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:29Z","lastTransitionTime":"2025-12-11T16:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.074591 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.076817 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.077470 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.080753 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.080814 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.080871 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1d42362-2047-47d8-b096-bd9f85606eeb-metrics-certs\") pod \"network-metrics-daemon-ccl9q\" (UID: \"f1d42362-2047-47d8-b096-bd9f85606eeb\") " pod="openshift-multus/network-metrics-daemon-ccl9q" Dec 11 16:02:29 crc kubenswrapper[5120]: E1211 16:02:29.080957 5120 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:30.080925973 +0000 UTC m=+99.335229304 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:29 crc kubenswrapper[5120]: E1211 16:02:29.080971 5120 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 16:02:29 crc kubenswrapper[5120]: E1211 16:02:29.081117 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1d42362-2047-47d8-b096-bd9f85606eeb-metrics-certs podName:f1d42362-2047-47d8-b096-bd9f85606eeb nodeName:}" failed. No retries permitted until 2025-12-11 16:02:30.081092267 +0000 UTC m=+99.335395598 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f1d42362-2047-47d8-b096-bd9f85606eeb-metrics-certs") pod "network-metrics-daemon-ccl9q" (UID: "f1d42362-2047-47d8-b096-bd9f85606eeb") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.084837 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.087142 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.088898 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.091543 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.092839 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.093570 5120 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.093679 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.097050 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.099162 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.099909 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.101420 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.101878 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.103219 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.104050 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.104587 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.105802 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.106916 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.108124 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.108824 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.109879 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.110676 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.111809 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.113351 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.114570 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.115654 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.116573 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.117672 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.176245 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.176616 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.176629 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.176646 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.176659 5120 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:29Z","lastTransitionTime":"2025-12-11T16:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.279414 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.279463 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.279475 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.279492 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.279504 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:29Z","lastTransitionTime":"2025-12-11T16:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.347474 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-j58br" event={"ID":"38cdad44-c229-4500-b4e7-92c3cafb0974","Type":"ContainerStarted","Data":"9b3c6bb9699badc81bd7aec34a328e3c3f763f0f01c64c2be7f987eb9850a37f"} Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.347832 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-j58br" event={"ID":"38cdad44-c229-4500-b4e7-92c3cafb0974","Type":"ContainerStarted","Data":"d06e83d80c0a75a8a0652b12c5028820065a46a2f57aa135bc15a79ca8cf6b1e"} Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.347848 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-j58br" event={"ID":"38cdad44-c229-4500-b4e7-92c3cafb0974","Type":"ContainerStarted","Data":"fc45727ed5c2d8d65b34cfac06d3e49a46bd823b9502d3f8b0d727a70ee20359"} Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.348447 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ddlz4" event={"ID":"93b9b6b4-9863-4f54-bc53-efeef34239df","Type":"ContainerStarted","Data":"687da4f6eaf8fa626a82c849da4d50074ab6eef07eef1d76aa655755a767dae5"} Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.348478 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ddlz4" event={"ID":"93b9b6b4-9863-4f54-bc53-efeef34239df","Type":"ContainerStarted","Data":"afc9b2e7cd35ff8f0d83dfaa5e136df20776e7dd228ae78cfed4cf2e3b1e9ed7"} Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.349298 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" 
event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"88691a86acf79c84c6a61b85d5466005a845846c94bc5c1f9a73083c527088c8"} Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.350361 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"c9f06b71b003273845de0d493e381c903c3eaa5b90a6861b017ca91f4816aa29"} Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.350389 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"f7623ec1820fa9ab7d07a430d1493828b745309c2087519844ad0c9b4936b8c7"} Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.352831 5120 generic.go:358] "Generic (PLEG): container finished" podID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerID="d0b8a3a9738ada95df16b32a75da31c56d4c8313dc56cf23bf35e5663f0883b4" exitCode=0 Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.352862 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" event={"ID":"d25df28f-e707-49ec-a539-9f1d1b40a297","Type":"ContainerDied","Data":"d0b8a3a9738ada95df16b32a75da31c56d4c8313dc56cf23bf35e5663f0883b4"} Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.352908 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" event={"ID":"d25df28f-e707-49ec-a539-9f1d1b40a297","Type":"ContainerStarted","Data":"69f1b544aeee78b0ad6657cc245609d0a90d9bac27f26c85945bc150eac13fee"} Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.355659 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" 
event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"36760beeec35d57e7b8428c55137eefead660a5489c611c8f78659cbd0fcf6d3"} Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.355692 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"f727e85852e41eadd8ab053815cd79b1a3784bdd564ba2f102365b28fb10e23a"} Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.355711 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"4eaf28c9bf79307793939bba2ad33a4ee9444af12d379559637aa1cb636f38f8"} Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.356987 5120 generic.go:358] "Generic (PLEG): container finished" podID="599e97f6-aab8-4d0f-8d66-720ca1f0756b" containerID="1848fc22e1fce0adbc010d71ff3b11e8ecbc405cfc9aecef8834c9aec01048bc" exitCode=0 Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.357061 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xmwrh" event={"ID":"599e97f6-aab8-4d0f-8d66-720ca1f0756b","Type":"ContainerDied","Data":"1848fc22e1fce0adbc010d71ff3b11e8ecbc405cfc9aecef8834c9aec01048bc"} Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.357086 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xmwrh" event={"ID":"599e97f6-aab8-4d0f-8d66-720ca1f0756b","Type":"ContainerStarted","Data":"a5eaca6306e6913dd289f3bd1f7a34292c92d56a3ae17e95bc8fc1ea9af80a85"} Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.360407 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" 
event={"ID":"e868a29f-b837-4513-ad30-f5b6c4354a09","Type":"ContainerStarted","Data":"b4a8a7e8e5957f6340282c016348d8e3e7930585d49fe44ab42dc46249eb82bc"} Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.360452 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" event={"ID":"e868a29f-b837-4513-ad30-f5b6c4354a09","Type":"ContainerStarted","Data":"8bded36918f78e5f934a2e80f529e76f291507fa4d302555d0d9666f63505ab7"} Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.360465 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" event={"ID":"e868a29f-b837-4513-ad30-f5b6c4354a09","Type":"ContainerStarted","Data":"8d5c335847691900df364032761dc34f0f8491c687104b395972af36342bdf9e"} Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.362264 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-djrpd" event={"ID":"8b8c6fde-f7c0-40b3-a70d-dcf2c8d865af","Type":"ContainerStarted","Data":"c18e0b8139b7d2ae14d6cbe5c50f6eca8a7a463df09320a82c8574147f0c2369"} Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.362322 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-djrpd" event={"ID":"8b8c6fde-f7c0-40b3-a70d-dcf2c8d865af","Type":"ContainerStarted","Data":"ffaf0afb36b559653e97544748b806caabc2b49b93bcce8a5d66bfaee55d7c54"} Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.363926 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qzwn6" event={"ID":"7143452f-c193-4dbf-872c-a3ae9245f158","Type":"ContainerStarted","Data":"5ab84bc58f9b4e30ab36c3bd52b5c52d1fbb38194251ce2275df6f68ea13b270"} Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.363955 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qzwn6" 
event={"ID":"7143452f-c193-4dbf-872c-a3ae9245f158","Type":"ContainerStarted","Data":"730a2ba9cbd7f6f5fdabb424776986fadc7258b9d6435de57472ff008f3615e8"} Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.383975 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=2.383959837 podStartE2EDuration="2.383959837s" podCreationTimestamp="2025-12-11 16:02:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:29.383393063 +0000 UTC m=+98.637696414" watchObservedRunningTime="2025-12-11 16:02:29.383959837 +0000 UTC m=+98.638263178" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.388343 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.388383 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.388395 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.388410 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.388422 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:29Z","lastTransitionTime":"2025-12-11T16:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.454745 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=2.454728063 podStartE2EDuration="2.454728063s" podCreationTimestamp="2025-12-11 16:02:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:29.454451086 +0000 UTC m=+98.708754407" watchObservedRunningTime="2025-12-11 16:02:29.454728063 +0000 UTC m=+98.709031394" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.494078 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.494120 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.494129 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.494169 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.494180 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:29Z","lastTransitionTime":"2025-12-11T16:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.497879 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-j58br" podStartSLOduration=80.497867001 podStartE2EDuration="1m20.497867001s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:29.487873239 +0000 UTC m=+98.742176570" watchObservedRunningTime="2025-12-11 16:02:29.497867001 +0000 UTC m=+98.752170332" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.597470 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.597516 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.597525 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.597540 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.597549 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:29Z","lastTransitionTime":"2025-12-11T16:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.614804 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=1.614790981 podStartE2EDuration="1.614790981s" podCreationTimestamp="2025-12-11 16:02:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:29.613389315 +0000 UTC m=+98.867692646" watchObservedRunningTime="2025-12-11 16:02:29.614790981 +0000 UTC m=+98.869094312" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.648283 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=2.648270265 podStartE2EDuration="2.648270265s" podCreationTimestamp="2025-12-11 16:02:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:29.647481065 +0000 UTC m=+98.901784406" watchObservedRunningTime="2025-12-11 16:02:29.648270265 +0000 UTC m=+98.902573586" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.678978 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-qzwn6" podStartSLOduration=80.678961469 podStartE2EDuration="1m20.678961469s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:29.678135089 +0000 UTC m=+98.932438410" watchObservedRunningTime="2025-12-11 16:02:29.678961469 +0000 UTC m=+98.933264800" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.699904 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 
16:02:29.700169 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.700261 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.700326 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.700383 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:29Z","lastTransitionTime":"2025-12-11T16:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.711922 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-djrpd" podStartSLOduration=80.7119017 podStartE2EDuration="1m20.7119017s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:29.711004228 +0000 UTC m=+98.965307569" watchObservedRunningTime="2025-12-11 16:02:29.7119017 +0000 UTC m=+98.966205031" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.723684 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" podStartSLOduration=80.723662457 podStartE2EDuration="1m20.723662457s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 
16:02:29.722681732 +0000 UTC m=+98.976985063" watchObservedRunningTime="2025-12-11 16:02:29.723662457 +0000 UTC m=+98.977965798" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.790048 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-ddlz4" podStartSLOduration=80.790033081 podStartE2EDuration="1m20.790033081s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:29.788947234 +0000 UTC m=+99.043250575" watchObservedRunningTime="2025-12-11 16:02:29.790033081 +0000 UTC m=+99.044336402" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.802494 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.802536 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.802547 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.802562 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.802572 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:29Z","lastTransitionTime":"2025-12-11T16:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.904364 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.904641 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.904651 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.904664 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.904673 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:29Z","lastTransitionTime":"2025-12-11T16:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.992830 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.992892 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.992932 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:02:29 crc kubenswrapper[5120]: I1211 16:02:29.992957 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:02:29 crc kubenswrapper[5120]: E1211 16:02:29.993055 5120 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Dec 11 16:02:29 crc kubenswrapper[5120]: E1211 16:02:29.993104 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 16:02:29 crc kubenswrapper[5120]: E1211 16:02:29.993125 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 16:02:29 crc kubenswrapper[5120]: E1211 16:02:29.993141 5120 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:02:29 crc kubenswrapper[5120]: E1211 16:02:29.993173 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-11 16:02:31.993128995 +0000 UTC m=+101.247432326 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 16:02:29 crc kubenswrapper[5120]: E1211 16:02:29.993334 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-11 16:02:31.993317389 +0000 UTC m=+101.247620720 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:02:29 crc kubenswrapper[5120]: E1211 16:02:29.993441 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 16:02:29 crc kubenswrapper[5120]: E1211 16:02:29.993452 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 16:02:29 crc kubenswrapper[5120]: E1211 16:02:29.993460 5120 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:02:29 crc kubenswrapper[5120]: E1211 16:02:29.993457 5120 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 16:02:29 crc kubenswrapper[5120]: E1211 16:02:29.993502 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-11 16:02:31.993493524 +0000 UTC m=+101.247796845 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:02:29 crc kubenswrapper[5120]: E1211 16:02:29.993723 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-11 16:02:31.993702609 +0000 UTC m=+101.248005940 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.010213 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.010246 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.010254 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.010268 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.010276 5120 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:30Z","lastTransitionTime":"2025-12-11T16:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.021222 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:02:30 crc kubenswrapper[5120]: E1211 16:02:30.021320 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.021382 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:02:30 crc kubenswrapper[5120]: E1211 16:02:30.021433 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.021498 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ccl9q" Dec 11 16:02:30 crc kubenswrapper[5120]: E1211 16:02:30.021549 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ccl9q" podUID="f1d42362-2047-47d8-b096-bd9f85606eeb" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.021596 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:02:30 crc kubenswrapper[5120]: E1211 16:02:30.021636 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.094092 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:30 crc kubenswrapper[5120]: E1211 16:02:30.094292 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-11 16:02:32.094256866 +0000 UTC m=+101.348560197 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.094402 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1d42362-2047-47d8-b096-bd9f85606eeb-metrics-certs\") pod \"network-metrics-daemon-ccl9q\" (UID: \"f1d42362-2047-47d8-b096-bd9f85606eeb\") " pod="openshift-multus/network-metrics-daemon-ccl9q" Dec 11 16:02:30 crc kubenswrapper[5120]: E1211 16:02:30.094532 5120 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 16:02:30 crc kubenswrapper[5120]: E1211 16:02:30.094578 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1d42362-2047-47d8-b096-bd9f85606eeb-metrics-certs podName:f1d42362-2047-47d8-b096-bd9f85606eeb nodeName:}" failed. No retries permitted until 2025-12-11 16:02:32.094571294 +0000 UTC m=+101.348874625 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f1d42362-2047-47d8-b096-bd9f85606eeb-metrics-certs") pod "network-metrics-daemon-ccl9q" (UID: "f1d42362-2047-47d8-b096-bd9f85606eeb") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.112571 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.112603 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.112612 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.112626 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.112634 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:30Z","lastTransitionTime":"2025-12-11T16:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.214728 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.214766 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.214778 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.214794 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.214805 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:30Z","lastTransitionTime":"2025-12-11T16:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.317107 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.317199 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.317211 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.317234 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.317248 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:30Z","lastTransitionTime":"2025-12-11T16:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.369412 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" event={"ID":"d25df28f-e707-49ec-a539-9f1d1b40a297","Type":"ContainerStarted","Data":"ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df"} Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.369456 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" event={"ID":"d25df28f-e707-49ec-a539-9f1d1b40a297","Type":"ContainerStarted","Data":"988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c"} Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.369467 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" event={"ID":"d25df28f-e707-49ec-a539-9f1d1b40a297","Type":"ContainerStarted","Data":"c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33"} Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.369475 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" event={"ID":"d25df28f-e707-49ec-a539-9f1d1b40a297","Type":"ContainerStarted","Data":"b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281"} Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.369487 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" event={"ID":"d25df28f-e707-49ec-a539-9f1d1b40a297","Type":"ContainerStarted","Data":"4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29"} Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.370908 5120 generic.go:358] "Generic (PLEG): container finished" podID="599e97f6-aab8-4d0f-8d66-720ca1f0756b" containerID="6373e597f3c75d7439f77fe2e315fbb66f9e152ee64df604b24aef6027c8d240" exitCode=0 Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.370964 5120 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="openshift-multus/multus-additional-cni-plugins-xmwrh" event={"ID":"599e97f6-aab8-4d0f-8d66-720ca1f0756b","Type":"ContainerDied","Data":"6373e597f3c75d7439f77fe2e315fbb66f9e152ee64df604b24aef6027c8d240"} Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.419867 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.419926 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.419941 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.419959 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.419973 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:30Z","lastTransitionTime":"2025-12-11T16:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.522086 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.522391 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.522406 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.522421 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.522431 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:30Z","lastTransitionTime":"2025-12-11T16:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.624552 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.624592 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.624601 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.624614 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.624624 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:30Z","lastTransitionTime":"2025-12-11T16:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.726808 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.726847 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.726856 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.726868 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.726877 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:30Z","lastTransitionTime":"2025-12-11T16:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.828373 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.828412 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.828421 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.828434 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.828443 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:30Z","lastTransitionTime":"2025-12-11T16:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.930014 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.930064 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.930076 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.930095 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:30 crc kubenswrapper[5120]: I1211 16:02:30.930112 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:30Z","lastTransitionTime":"2025-12-11T16:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.031628 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.031668 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.031676 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.031688 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.031697 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:31Z","lastTransitionTime":"2025-12-11T16:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.133535 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.133575 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.133584 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.133597 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.133605 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:31Z","lastTransitionTime":"2025-12-11T16:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.235481 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.235528 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.235538 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.235553 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.235565 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:31Z","lastTransitionTime":"2025-12-11T16:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.338221 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.338267 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.338280 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.338299 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.338311 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:31Z","lastTransitionTime":"2025-12-11T16:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.377517 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" event={"ID":"d25df28f-e707-49ec-a539-9f1d1b40a297","Type":"ContainerStarted","Data":"76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d"} Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.379248 5120 generic.go:358] "Generic (PLEG): container finished" podID="599e97f6-aab8-4d0f-8d66-720ca1f0756b" containerID="531af4ae2ddada1d354f749cb51fa255ddc4f1c4183c4f523ae0d5f253e84634" exitCode=0 Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.379290 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xmwrh" event={"ID":"599e97f6-aab8-4d0f-8d66-720ca1f0756b","Type":"ContainerDied","Data":"531af4ae2ddada1d354f749cb51fa255ddc4f1c4183c4f523ae0d5f253e84634"} Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.440740 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.440786 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.440795 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.440811 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.440821 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:31Z","lastTransitionTime":"2025-12-11T16:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.497017 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.497068 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.497081 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.497098 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.497113 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T16:02:31Z","lastTransitionTime":"2025-12-11T16:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.532351 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xgjzh"] Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.536649 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xgjzh" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.538028 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.538634 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.538742 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.539337 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.614503 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fe157998-3f7f-49cc-87e8-61ddf00f3cb0-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-xgjzh\" (UID: \"fe157998-3f7f-49cc-87e8-61ddf00f3cb0\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xgjzh" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.614565 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/fe157998-3f7f-49cc-87e8-61ddf00f3cb0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-xgjzh\" (UID: \"fe157998-3f7f-49cc-87e8-61ddf00f3cb0\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xgjzh" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.614587 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe157998-3f7f-49cc-87e8-61ddf00f3cb0-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-xgjzh\" (UID: \"fe157998-3f7f-49cc-87e8-61ddf00f3cb0\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xgjzh" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.614649 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/fe157998-3f7f-49cc-87e8-61ddf00f3cb0-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-xgjzh\" (UID: \"fe157998-3f7f-49cc-87e8-61ddf00f3cb0\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xgjzh" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.614675 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fe157998-3f7f-49cc-87e8-61ddf00f3cb0-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-xgjzh\" (UID: \"fe157998-3f7f-49cc-87e8-61ddf00f3cb0\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xgjzh" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.715657 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fe157998-3f7f-49cc-87e8-61ddf00f3cb0-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-xgjzh\" (UID: \"fe157998-3f7f-49cc-87e8-61ddf00f3cb0\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xgjzh" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.715698 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/fe157998-3f7f-49cc-87e8-61ddf00f3cb0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-xgjzh\" (UID: \"fe157998-3f7f-49cc-87e8-61ddf00f3cb0\") " 
pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xgjzh" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.715715 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe157998-3f7f-49cc-87e8-61ddf00f3cb0-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-xgjzh\" (UID: \"fe157998-3f7f-49cc-87e8-61ddf00f3cb0\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xgjzh" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.715875 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/fe157998-3f7f-49cc-87e8-61ddf00f3cb0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-xgjzh\" (UID: \"fe157998-3f7f-49cc-87e8-61ddf00f3cb0\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xgjzh" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.715905 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/fe157998-3f7f-49cc-87e8-61ddf00f3cb0-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-xgjzh\" (UID: \"fe157998-3f7f-49cc-87e8-61ddf00f3cb0\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xgjzh" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.715999 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fe157998-3f7f-49cc-87e8-61ddf00f3cb0-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-xgjzh\" (UID: \"fe157998-3f7f-49cc-87e8-61ddf00f3cb0\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xgjzh" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.715944 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/fe157998-3f7f-49cc-87e8-61ddf00f3cb0-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-xgjzh\" (UID: \"fe157998-3f7f-49cc-87e8-61ddf00f3cb0\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xgjzh" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.717276 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fe157998-3f7f-49cc-87e8-61ddf00f3cb0-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-xgjzh\" (UID: \"fe157998-3f7f-49cc-87e8-61ddf00f3cb0\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xgjzh" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.721467 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe157998-3f7f-49cc-87e8-61ddf00f3cb0-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-xgjzh\" (UID: \"fe157998-3f7f-49cc-87e8-61ddf00f3cb0\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xgjzh" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.736728 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fe157998-3f7f-49cc-87e8-61ddf00f3cb0-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-xgjzh\" (UID: \"fe157998-3f7f-49cc-87e8-61ddf00f3cb0\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xgjzh" Dec 11 16:02:31 crc kubenswrapper[5120]: I1211 16:02:31.872474 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xgjzh" Dec 11 16:02:31 crc kubenswrapper[5120]: W1211 16:02:31.886772 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe157998_3f7f_49cc_87e8_61ddf00f3cb0.slice/crio-0bf71dd2a54495527faf7df965c3416574af7b2fc26b668a003edb26f5090ea5 WatchSource:0}: Error finding container 0bf71dd2a54495527faf7df965c3416574af7b2fc26b668a003edb26f5090ea5: Status 404 returned error can't find the container with id 0bf71dd2a54495527faf7df965c3416574af7b2fc26b668a003edb26f5090ea5 Dec 11 16:02:32 crc kubenswrapper[5120]: I1211 16:02:32.019582 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:02:32 crc kubenswrapper[5120]: I1211 16:02:32.019634 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:02:32 crc kubenswrapper[5120]: E1211 16:02:32.019717 5120 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 16:02:32 crc kubenswrapper[5120]: E1211 16:02:32.019783 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" 
failed. No retries permitted until 2025-12-11 16:02:36.01976237 +0000 UTC m=+105.274065721 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 16:02:32 crc kubenswrapper[5120]: I1211 16:02:32.019827 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:02:32 crc kubenswrapper[5120]: I1211 16:02:32.019860 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:02:32 crc kubenswrapper[5120]: E1211 16:02:32.019883 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 16:02:32 crc kubenswrapper[5120]: E1211 16:02:32.019908 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 16:02:32 crc kubenswrapper[5120]: E1211 16:02:32.020002 5120 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod 
openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:02:32 crc kubenswrapper[5120]: E1211 16:02:32.020018 5120 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 16:02:32 crc kubenswrapper[5120]: E1211 16:02:32.020059 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-11 16:02:36.020040257 +0000 UTC m=+105.274343588 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:02:32 crc kubenswrapper[5120]: E1211 16:02:32.019970 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 16:02:32 crc kubenswrapper[5120]: E1211 16:02:32.020105 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 16:02:32 crc kubenswrapper[5120]: E1211 16:02:32.020122 5120 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:02:32 crc kubenswrapper[5120]: E1211 16:02:32.020125 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-11 16:02:36.020102689 +0000 UTC m=+105.274406110 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 16:02:32 crc kubenswrapper[5120]: E1211 16:02:32.020192 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-11 16:02:36.020178831 +0000 UTC m=+105.274482162 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:02:32 crc kubenswrapper[5120]: I1211 16:02:32.021461 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ccl9q" Dec 11 16:02:32 crc kubenswrapper[5120]: I1211 16:02:32.021494 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:02:32 crc kubenswrapper[5120]: I1211 16:02:32.021585 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:02:32 crc kubenswrapper[5120]: E1211 16:02:32.021582 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ccl9q" podUID="f1d42362-2047-47d8-b096-bd9f85606eeb" Dec 11 16:02:32 crc kubenswrapper[5120]: E1211 16:02:32.021713 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 11 16:02:32 crc kubenswrapper[5120]: E1211 16:02:32.021815 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 11 16:02:32 crc kubenswrapper[5120]: I1211 16:02:32.021868 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:02:32 crc kubenswrapper[5120]: E1211 16:02:32.021958 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 11 16:02:32 crc kubenswrapper[5120]: I1211 16:02:32.120888 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:32 crc kubenswrapper[5120]: E1211 16:02:32.121199 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:36.121138808 +0000 UTC m=+105.375442139 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:32 crc kubenswrapper[5120]: I1211 16:02:32.121368 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1d42362-2047-47d8-b096-bd9f85606eeb-metrics-certs\") pod \"network-metrics-daemon-ccl9q\" (UID: \"f1d42362-2047-47d8-b096-bd9f85606eeb\") " pod="openshift-multus/network-metrics-daemon-ccl9q" Dec 11 16:02:32 crc kubenswrapper[5120]: E1211 16:02:32.121528 5120 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 16:02:32 crc kubenswrapper[5120]: E1211 16:02:32.121627 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1d42362-2047-47d8-b096-bd9f85606eeb-metrics-certs podName:f1d42362-2047-47d8-b096-bd9f85606eeb nodeName:}" failed. No retries permitted until 2025-12-11 16:02:36.121607349 +0000 UTC m=+105.375910680 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f1d42362-2047-47d8-b096-bd9f85606eeb-metrics-certs") pod "network-metrics-daemon-ccl9q" (UID: "f1d42362-2047-47d8-b096-bd9f85606eeb") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 16:02:32 crc kubenswrapper[5120]: I1211 16:02:32.315653 5120 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 11 16:02:32 crc kubenswrapper[5120]: I1211 16:02:32.324642 5120 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 11 16:02:32 crc kubenswrapper[5120]: I1211 16:02:32.383219 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"4c9b4bf746c96ff5f853ecdf79be697d89d17f75430a9589eaefb70e10617889"} Dec 11 16:02:32 crc kubenswrapper[5120]: I1211 16:02:32.385118 5120 generic.go:358] "Generic (PLEG): container finished" podID="599e97f6-aab8-4d0f-8d66-720ca1f0756b" containerID="c5a5877c355c9b526708a466a5f9a9f8e64b82f1ae8ac711866db17df16d5854" exitCode=0 Dec 11 16:02:32 crc kubenswrapper[5120]: I1211 16:02:32.385171 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xmwrh" event={"ID":"599e97f6-aab8-4d0f-8d66-720ca1f0756b","Type":"ContainerDied","Data":"c5a5877c355c9b526708a466a5f9a9f8e64b82f1ae8ac711866db17df16d5854"} Dec 11 16:02:32 crc kubenswrapper[5120]: I1211 16:02:32.386779 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xgjzh" event={"ID":"fe157998-3f7f-49cc-87e8-61ddf00f3cb0","Type":"ContainerStarted","Data":"c050957921305041a8fc3ba73bd44c9c7504de72a1b31bbb915db9e621d754fa"} Dec 11 16:02:32 crc kubenswrapper[5120]: I1211 
16:02:32.386807 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xgjzh" event={"ID":"fe157998-3f7f-49cc-87e8-61ddf00f3cb0","Type":"ContainerStarted","Data":"0bf71dd2a54495527faf7df965c3416574af7b2fc26b668a003edb26f5090ea5"} Dec 11 16:02:32 crc kubenswrapper[5120]: I1211 16:02:32.430835 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xgjzh" podStartSLOduration=83.430819459 podStartE2EDuration="1m23.430819459s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:32.430221494 +0000 UTC m=+101.684524855" watchObservedRunningTime="2025-12-11 16:02:32.430819459 +0000 UTC m=+101.685122790" Dec 11 16:02:33 crc kubenswrapper[5120]: I1211 16:02:33.392222 5120 generic.go:358] "Generic (PLEG): container finished" podID="599e97f6-aab8-4d0f-8d66-720ca1f0756b" containerID="677b63e3415eba51460160d046e91e24039ac075d12b52f0e6ae643cd1d6df14" exitCode=0 Dec 11 16:02:33 crc kubenswrapper[5120]: I1211 16:02:33.392304 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xmwrh" event={"ID":"599e97f6-aab8-4d0f-8d66-720ca1f0756b","Type":"ContainerDied","Data":"677b63e3415eba51460160d046e91e24039ac075d12b52f0e6ae643cd1d6df14"} Dec 11 16:02:33 crc kubenswrapper[5120]: I1211 16:02:33.396138 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" event={"ID":"d25df28f-e707-49ec-a539-9f1d1b40a297","Type":"ContainerStarted","Data":"e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76"} Dec 11 16:02:34 crc kubenswrapper[5120]: I1211 16:02:34.021562 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:02:34 crc kubenswrapper[5120]: E1211 16:02:34.022018 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 11 16:02:34 crc kubenswrapper[5120]: I1211 16:02:34.021752 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ccl9q" Dec 11 16:02:34 crc kubenswrapper[5120]: E1211 16:02:34.022269 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ccl9q" podUID="f1d42362-2047-47d8-b096-bd9f85606eeb" Dec 11 16:02:34 crc kubenswrapper[5120]: I1211 16:02:34.021746 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:02:34 crc kubenswrapper[5120]: E1211 16:02:34.022510 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 11 16:02:34 crc kubenswrapper[5120]: I1211 16:02:34.021795 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:02:34 crc kubenswrapper[5120]: E1211 16:02:34.022740 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 11 16:02:34 crc kubenswrapper[5120]: I1211 16:02:34.402264 5120 generic.go:358] "Generic (PLEG): container finished" podID="599e97f6-aab8-4d0f-8d66-720ca1f0756b" containerID="3c5a2bfb58cb339251d262e4c8c675d8444f8acd714b8217fcf6c731fde2c4ff" exitCode=0 Dec 11 16:02:34 crc kubenswrapper[5120]: I1211 16:02:34.402341 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xmwrh" event={"ID":"599e97f6-aab8-4d0f-8d66-720ca1f0756b","Type":"ContainerDied","Data":"3c5a2bfb58cb339251d262e4c8c675d8444f8acd714b8217fcf6c731fde2c4ff"} Dec 11 16:02:36 crc kubenswrapper[5120]: I1211 16:02:36.021301 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ccl9q" Dec 11 16:02:36 crc kubenswrapper[5120]: E1211 16:02:36.021940 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ccl9q" podUID="f1d42362-2047-47d8-b096-bd9f85606eeb" Dec 11 16:02:36 crc kubenswrapper[5120]: I1211 16:02:36.021365 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:02:36 crc kubenswrapper[5120]: E1211 16:02:36.022025 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 11 16:02:36 crc kubenswrapper[5120]: I1211 16:02:36.021301 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:02:36 crc kubenswrapper[5120]: I1211 16:02:36.021418 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:02:36 crc kubenswrapper[5120]: E1211 16:02:36.022088 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 11 16:02:36 crc kubenswrapper[5120]: E1211 16:02:36.022221 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 11 16:02:36 crc kubenswrapper[5120]: I1211 16:02:36.062978 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:02:36 crc kubenswrapper[5120]: I1211 16:02:36.063030 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:02:36 crc kubenswrapper[5120]: I1211 16:02:36.063053 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:02:36 crc kubenswrapper[5120]: E1211 16:02:36.063139 5120 configmap.go:193] Couldn't get configMap 
openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 16:02:36 crc kubenswrapper[5120]: E1211 16:02:36.063179 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 16:02:36 crc kubenswrapper[5120]: E1211 16:02:36.063218 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 16:02:36 crc kubenswrapper[5120]: E1211 16:02:36.063231 5120 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:02:36 crc kubenswrapper[5120]: E1211 16:02:36.063275 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 16:02:36 crc kubenswrapper[5120]: E1211 16:02:36.063233 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-11 16:02:44.06321955 +0000 UTC m=+113.317522881 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 16:02:36 crc kubenswrapper[5120]: E1211 16:02:36.063301 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 16:02:36 crc kubenswrapper[5120]: E1211 16:02:36.063325 5120 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:02:36 crc kubenswrapper[5120]: I1211 16:02:36.063335 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:02:36 crc kubenswrapper[5120]: E1211 16:02:36.063376 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-11 16:02:44.063363434 +0000 UTC m=+113.317666785 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:02:36 crc kubenswrapper[5120]: E1211 16:02:36.063410 5120 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 16:02:36 crc kubenswrapper[5120]: E1211 16:02:36.063419 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-11 16:02:44.063401035 +0000 UTC m=+113.317704416 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 16:02:36 crc kubenswrapper[5120]: E1211 16:02:36.063446 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-11 16:02:44.063438726 +0000 UTC m=+113.317742047 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 16:02:36 crc kubenswrapper[5120]: I1211 16:02:36.165002 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:36 crc kubenswrapper[5120]: I1211 16:02:36.165124 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1d42362-2047-47d8-b096-bd9f85606eeb-metrics-certs\") pod \"network-metrics-daemon-ccl9q\" (UID: \"f1d42362-2047-47d8-b096-bd9f85606eeb\") " pod="openshift-multus/network-metrics-daemon-ccl9q" Dec 11 16:02:36 crc kubenswrapper[5120]: E1211 16:02:36.165243 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:44.165211173 +0000 UTC m=+113.419514524 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:36 crc kubenswrapper[5120]: E1211 16:02:36.165354 5120 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 16:02:36 crc kubenswrapper[5120]: E1211 16:02:36.165415 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1d42362-2047-47d8-b096-bd9f85606eeb-metrics-certs podName:f1d42362-2047-47d8-b096-bd9f85606eeb nodeName:}" failed. No retries permitted until 2025-12-11 16:02:44.165403878 +0000 UTC m=+113.419707309 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f1d42362-2047-47d8-b096-bd9f85606eeb-metrics-certs") pod "network-metrics-daemon-ccl9q" (UID: "f1d42362-2047-47d8-b096-bd9f85606eeb") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 11 16:02:36 crc kubenswrapper[5120]: I1211 16:02:36.413044 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" event={"ID":"d25df28f-e707-49ec-a539-9f1d1b40a297","Type":"ContainerStarted","Data":"fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703"}
Dec 11 16:02:36 crc kubenswrapper[5120]: I1211 16:02:36.413481 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:36 crc kubenswrapper[5120]: I1211 16:02:36.413529 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:36 crc kubenswrapper[5120]: I1211 16:02:36.418298 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xmwrh" event={"ID":"599e97f6-aab8-4d0f-8d66-720ca1f0756b","Type":"ContainerStarted","Data":"39a69c0ab24b8391039bd7a32aff51375bbb3ee6b59e433afdcefaab26c5c141"}
Dec 11 16:02:36 crc kubenswrapper[5120]: I1211 16:02:36.436411 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:36 crc kubenswrapper[5120]: I1211 16:02:36.436865 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" podStartSLOduration=87.436849956 podStartE2EDuration="1m27.436849956s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:36.436416695 +0000 UTC m=+105.690720026" watchObservedRunningTime="2025-12-11 16:02:36.436849956 +0000 UTC m=+105.691153307"
Dec 11 16:02:36 crc kubenswrapper[5120]: I1211 16:02:36.506573 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-xmwrh" podStartSLOduration=87.506544114 podStartE2EDuration="1m27.506544114s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:36.504174284 +0000 UTC m=+105.758477645" watchObservedRunningTime="2025-12-11 16:02:36.506544114 +0000 UTC m=+105.760847495"
Dec 11 16:02:37 crc kubenswrapper[5120]: I1211 16:02:37.420975 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:37 crc kubenswrapper[5120]: I1211 16:02:37.441204 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85"
Dec 11 16:02:37 crc kubenswrapper[5120]: I1211 16:02:37.470782 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-ccl9q"]
Dec 11 16:02:37 crc kubenswrapper[5120]: I1211 16:02:37.470896 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ccl9q"
Dec 11 16:02:37 crc kubenswrapper[5120]: E1211 16:02:37.470976 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ccl9q" podUID="f1d42362-2047-47d8-b096-bd9f85606eeb"
Dec 11 16:02:38 crc kubenswrapper[5120]: I1211 16:02:38.020799 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 11 16:02:38 crc kubenswrapper[5120]: E1211 16:02:38.020902 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 11 16:02:38 crc kubenswrapper[5120]: I1211 16:02:38.020812 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 11 16:02:38 crc kubenswrapper[5120]: E1211 16:02:38.021046 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 11 16:02:38 crc kubenswrapper[5120]: I1211 16:02:38.021358 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 11 16:02:38 crc kubenswrapper[5120]: E1211 16:02:38.021499 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 11 16:02:39 crc kubenswrapper[5120]: I1211 16:02:39.025738 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ccl9q"
Dec 11 16:02:39 crc kubenswrapper[5120]: E1211 16:02:39.025866 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ccl9q" podUID="f1d42362-2047-47d8-b096-bd9f85606eeb"
Dec 11 16:02:39 crc kubenswrapper[5120]: I1211 16:02:39.026813 5120 scope.go:117] "RemoveContainer" containerID="7a41fbe2b0881e86b16c4ddd845a97a6f0fe9b72c6b542e1e379a369c26766ad"
Dec 11 16:02:39 crc kubenswrapper[5120]: E1211 16:02:39.027062 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 11 16:02:40 crc kubenswrapper[5120]: I1211 16:02:40.021206 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 11 16:02:40 crc kubenswrapper[5120]: E1211 16:02:40.021620 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 11 16:02:40 crc kubenswrapper[5120]: I1211 16:02:40.021759 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 11 16:02:40 crc kubenswrapper[5120]: E1211 16:02:40.021902 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 11 16:02:40 crc kubenswrapper[5120]: I1211 16:02:40.022189 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 11 16:02:40 crc kubenswrapper[5120]: E1211 16:02:40.022656 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 11 16:02:41 crc kubenswrapper[5120]: I1211 16:02:41.025102 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ccl9q"
Dec 11 16:02:41 crc kubenswrapper[5120]: E1211 16:02:41.025245 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ccl9q" podUID="f1d42362-2047-47d8-b096-bd9f85606eeb"
Dec 11 16:02:42 crc kubenswrapper[5120]: I1211 16:02:42.021575 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 11 16:02:42 crc kubenswrapper[5120]: E1211 16:02:42.021708 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 11 16:02:42 crc kubenswrapper[5120]: I1211 16:02:42.021968 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 11 16:02:42 crc kubenswrapper[5120]: E1211 16:02:42.022200 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 11 16:02:42 crc kubenswrapper[5120]: I1211 16:02:42.022207 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 11 16:02:42 crc kubenswrapper[5120]: E1211 16:02:42.022359 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 11 16:02:42 crc kubenswrapper[5120]: I1211 16:02:42.814584 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady"
Dec 11 16:02:42 crc kubenswrapper[5120]: I1211 16:02:42.815728 5120 kubelet_node_status.go:550] "Fast updating node status as it just became ready"
Dec 11 16:02:42 crc kubenswrapper[5120]: I1211 16:02:42.853693 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-pv79n"]
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.118697 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.118752 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.118792 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.118838 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 11 16:02:44 crc kubenswrapper[5120]: E1211 16:02:44.118923 5120 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 11 16:02:44 crc kubenswrapper[5120]: E1211 16:02:44.119008 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 11 16:02:44 crc kubenswrapper[5120]: E1211 16:02:44.119027 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 11 16:02:44 crc kubenswrapper[5120]: E1211 16:02:44.119038 5120 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 11 16:02:44 crc kubenswrapper[5120]: E1211 16:02:44.119101 5120 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 11 16:02:44 crc kubenswrapper[5120]: E1211 16:02:44.119110 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 11 16:02:44 crc kubenswrapper[5120]: E1211 16:02:44.119215 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 11 16:02:44 crc kubenswrapper[5120]: E1211 16:02:44.119242 5120 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 11 16:02:44 crc kubenswrapper[5120]: E1211 16:02:44.119016 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-11 16:03:00.118996248 +0000 UTC m=+129.373299579 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 11 16:02:44 crc kubenswrapper[5120]: E1211 16:02:44.119350 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-11 16:03:00.119317626 +0000 UTC m=+129.373620997 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 11 16:02:44 crc kubenswrapper[5120]: E1211 16:02:44.119387 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-11 16:03:00.119373358 +0000 UTC m=+129.373676719 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 11 16:02:44 crc kubenswrapper[5120]: E1211 16:02:44.119413 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-11 16:03:00.119397148 +0000 UTC m=+129.373700619 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.220297 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 11 16:02:44 crc kubenswrapper[5120]: E1211 16:02:44.220447 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:03:00.220411327 +0000 UTC m=+129.474714658 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.220535 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1d42362-2047-47d8-b096-bd9f85606eeb-metrics-certs\") pod \"network-metrics-daemon-ccl9q\" (UID: \"f1d42362-2047-47d8-b096-bd9f85606eeb\") " pod="openshift-multus/network-metrics-daemon-ccl9q"
Dec 11 16:02:44 crc kubenswrapper[5120]: E1211 16:02:44.220654 5120 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 11 16:02:44 crc kubenswrapper[5120]: E1211 16:02:44.220706 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1d42362-2047-47d8-b096-bd9f85606eeb-metrics-certs podName:f1d42362-2047-47d8-b096-bd9f85606eeb nodeName:}" failed. No retries permitted until 2025-12-11 16:03:00.220695354 +0000 UTC m=+129.474998685 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f1d42362-2047-47d8-b096-bd9f85606eeb-metrics-certs") pod "network-metrics-daemon-ccl9q" (UID: "f1d42362-2047-47d8-b096-bd9f85606eeb") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.510062 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jdp2z"]
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.510526 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.515206 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jdp2z"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.517581 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.517804 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.517872 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.518308 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.518368 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-hp4tr"]
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.518480 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.518547 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.518756 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.518793 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.518760 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.518910 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.519050 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.519203 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.519327 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.519502 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.519524 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ccl9q"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.520101 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.520322 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.523893 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/afce2ce8-429d-4f15-b746-a6de58cd6246-audit-policies\") pod \"apiserver-8596bd845d-pv79n\" (UID: \"afce2ce8-429d-4f15-b746-a6de58cd6246\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.523921 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/afce2ce8-429d-4f15-b746-a6de58cd6246-etcd-serving-ca\") pod \"apiserver-8596bd845d-pv79n\" (UID: \"afce2ce8-429d-4f15-b746-a6de58cd6246\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.523969 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/afce2ce8-429d-4f15-b746-a6de58cd6246-encryption-config\") pod \"apiserver-8596bd845d-pv79n\" (UID: \"afce2ce8-429d-4f15-b746-a6de58cd6246\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.523985 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/afce2ce8-429d-4f15-b746-a6de58cd6246-audit-dir\") pod \"apiserver-8596bd845d-pv79n\" (UID: \"afce2ce8-429d-4f15-b746-a6de58cd6246\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.524001 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5sg8\" (UniqueName: \"kubernetes.io/projected/afce2ce8-429d-4f15-b746-a6de58cd6246-kube-api-access-q5sg8\") pod \"apiserver-8596bd845d-pv79n\" (UID: \"afce2ce8-429d-4f15-b746-a6de58cd6246\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.524025 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afce2ce8-429d-4f15-b746-a6de58cd6246-serving-cert\") pod \"apiserver-8596bd845d-pv79n\" (UID: \"afce2ce8-429d-4f15-b746-a6de58cd6246\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.524047 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/afce2ce8-429d-4f15-b746-a6de58cd6246-etcd-client\") pod \"apiserver-8596bd845d-pv79n\" (UID: \"afce2ce8-429d-4f15-b746-a6de58cd6246\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.524066 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afce2ce8-429d-4f15-b746-a6de58cd6246-trusted-ca-bundle\") pod \"apiserver-8596bd845d-pv79n\" (UID: \"afce2ce8-429d-4f15-b746-a6de58cd6246\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.524656 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.524862 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.525049 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.525384 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.529269 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2"]
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.529544 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-hp4tr"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.538668 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-9jr6t"]
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.538967 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.541075 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.541252 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.541611 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.541737 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.541866 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.542039 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.542209 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.542350 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.542827 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.542980 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.543091 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.543277 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.543456 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.543514 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.546767 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.553420 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lzsrd"]
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.553550 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.557308 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.558046 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.558210 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.558593 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.558647 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.558615 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.558742 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-kwtc2"]
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.558894 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lzsrd"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.574556 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.574610 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.575025 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.575254 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.579067 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-z2h5b"]
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.579717 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kwtc2"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.583959 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.585936 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-5r6br"]
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.586243 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.586480 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.586627 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.588419 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-hfkdh"]
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.589186 5120 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.591052 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.591055 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-xkpsj"] Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.591173 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-z2h5b" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.591500 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.591827 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.592005 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.593518 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-vl5r9"] Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.594129 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-hfkdh" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.594533 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.596035 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.596211 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.596101 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.596610 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.596750 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.596763 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.599578 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.599654 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-t6vqb"] Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.599886 5120 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.600806 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.601218 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.601420 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.601672 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.601929 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.602265 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.602399 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xkpsj" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.602773 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.604610 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-cgkwz"] Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.605421 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.605567 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.605734 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.605860 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.606198 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.606358 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-vl5r9" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.606408 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.606792 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.606832 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.607004 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.607617 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.610751 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-v9567"] Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.610900 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.611052 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.610905 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-cgkwz" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.611295 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.612377 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.613715 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.614021 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8jspp"] Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.616684 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.616739 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.616960 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.619867 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-x4v9v"] Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.619990 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-v9567" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.619847 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.621050 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8jspp" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.625979 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.625984 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.626034 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.626004 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.626087 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.626191 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.626268 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.626394 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.626456 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.626447 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f7d2c51-135d-4c03-a786-49ddacf80604-config\") pod \"console-operator-67c89758df-hfkdh\" (UID: \"6f7d2c51-135d-4c03-a786-49ddacf80604\") " pod="openshift-console-operator/console-operator-67c89758df-hfkdh" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.626503 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37511e72-cd8c-48b6-ae0d-43cac767eb19-serving-cert\") pod \"authentication-operator-7f5c659b84-kwtc2\" (UID: \"37511e72-cd8c-48b6-ae0d-43cac767eb19\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kwtc2" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.626531 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdzqd\" (UniqueName: \"kubernetes.io/projected/78ef6e28-4ff8-4744-88df-70e2aaaa2873-kube-api-access-qdzqd\") pod \"openshift-apiserver-operator-846cbfc458-z2h5b\" (UID: \"78ef6e28-4ff8-4744-88df-70e2aaaa2873\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-z2h5b" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.626553 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-tmp\") pod \"controller-manager-65b6cccf98-9jr6t\" (UID: \"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346\") " 
pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.626598 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.626641 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a9fb027d-1ae2-484c-be51-43df1da17bde-node-pullsecrets\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.626696 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a9fb027d-1ae2-484c-be51-43df1da17bde-audit\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.626744 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a9fb027d-1ae2-484c-be51-43df1da17bde-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.626764 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8p5h\" (UniqueName: \"kubernetes.io/projected/a9fb027d-1ae2-484c-be51-43df1da17bde-kube-api-access-p8p5h\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.626838 5120 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/afce2ce8-429d-4f15-b746-a6de58cd6246-etcd-serving-ca\") pod \"apiserver-8596bd845d-pv79n\" (UID: \"afce2ce8-429d-4f15-b746-a6de58cd6246\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.626878 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.626907 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/78d60dd3-8522-4475-a323-6acf4ac1abdc-available-featuregates\") pod \"openshift-config-operator-5777786469-vl5r9\" (UID: \"78d60dd3-8522-4475-a323-6acf4ac1abdc\") " pod="openshift-config-operator/openshift-config-operator-5777786469-vl5r9" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.626923 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37511e72-cd8c-48b6-ae0d-43cac767eb19-config\") pod \"authentication-operator-7f5c659b84-kwtc2\" (UID: \"37511e72-cd8c-48b6-ae0d-43cac767eb19\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kwtc2" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.626949 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/afce2ce8-429d-4f15-b746-a6de58cd6246-audit-dir\") pod \"apiserver-8596bd845d-pv79n\" (UID: 
\"afce2ce8-429d-4f15-b746-a6de58cd6246\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.626966 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.626990 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvctm\" (UniqueName: \"kubernetes.io/projected/dddece50-dbb3-4cd0-9102-15371396ab49-kube-api-access-fvctm\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.627009 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt7xj\" (UniqueName: \"kubernetes.io/projected/a8c6a899-cb63-47a3-b599-77212125f7d9-kube-api-access-nt7xj\") pod \"cluster-samples-operator-6b564684c8-lzsrd\" (UID: \"a8c6a899-cb63-47a3-b599-77212125f7d9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lzsrd" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.627024 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37511e72-cd8c-48b6-ae0d-43cac767eb19-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-kwtc2\" (UID: \"37511e72-cd8c-48b6-ae0d-43cac767eb19\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kwtc2" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 
16:02:44.627045 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee0a66bb-0ca8-4951-bc8b-d9ef6cfe3639-auth-proxy-config\") pod \"machine-approver-54c688565-hp4tr\" (UID: \"ee0a66bb-0ca8-4951-bc8b-d9ef6cfe3639\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hp4tr" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.627061 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78ef6e28-4ff8-4744-88df-70e2aaaa2873-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-z2h5b\" (UID: \"78ef6e28-4ff8-4744-88df-70e2aaaa2873\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-z2h5b" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.627050 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/afce2ce8-429d-4f15-b746-a6de58cd6246-audit-dir\") pod \"apiserver-8596bd845d-pv79n\" (UID: \"afce2ce8-429d-4f15-b746-a6de58cd6246\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.627080 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-9jr6t\" (UID: \"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.627047 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.627107 5120 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.627119 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78d60dd3-8522-4475-a323-6acf4ac1abdc-serving-cert\") pod \"openshift-config-operator-5777786469-vl5r9\" (UID: \"78d60dd3-8522-4475-a323-6acf4ac1abdc\") " pod="openshift-config-operator/openshift-config-operator-5777786469-vl5r9" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.627174 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jftg8\" (UniqueName: \"kubernetes.io/projected/78d60dd3-8522-4475-a323-6acf4ac1abdc-kube-api-access-jftg8\") pod \"openshift-config-operator-5777786469-vl5r9\" (UID: \"78d60dd3-8522-4475-a323-6acf4ac1abdc\") " pod="openshift-config-operator/openshift-config-operator-5777786469-vl5r9" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.627193 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dddece50-dbb3-4cd0-9102-15371396ab49-audit-dir\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.627211 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.627230 5120 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78ef6e28-4ff8-4744-88df-70e2aaaa2873-config\") pod \"openshift-apiserver-operator-846cbfc458-z2h5b\" (UID: \"78ef6e28-4ff8-4744-88df-70e2aaaa2873\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-z2h5b" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.627246 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c24304d-6405-40ae-b1ce-9d1ed668e39f-config\") pod \"machine-api-operator-755bb95488-cgkwz\" (UID: \"3c24304d-6405-40ae-b1ce-9d1ed668e39f\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cgkwz" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.627273 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmk49\" (UniqueName: \"kubernetes.io/projected/3c24304d-6405-40ae-b1ce-9d1ed668e39f-kube-api-access-cmk49\") pod \"machine-api-operator-755bb95488-cgkwz\" (UID: \"3c24304d-6405-40ae-b1ce-9d1ed668e39f\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cgkwz" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.627301 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90663dc8-366f-45cb-8db2-8360cdc28f74-config\") pod \"route-controller-manager-776cdc94d6-9b4f2\" (UID: \"90663dc8-366f-45cb-8db2-8360cdc28f74\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.627320 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q869b\" (UniqueName: \"kubernetes.io/projected/37511e72-cd8c-48b6-ae0d-43cac767eb19-kube-api-access-q869b\") pod 
\"authentication-operator-7f5c659b84-kwtc2\" (UID: \"37511e72-cd8c-48b6-ae0d-43cac767eb19\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kwtc2"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.627341 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5388c205-9818-42bc-b518-455547e8faf1-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-jdp2z\" (UID: \"5388c205-9818-42bc-b518-455547e8faf1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jdp2z"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.627358 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9fb027d-1ae2-484c-be51-43df1da17bde-config\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.627386 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a9fb027d-1ae2-484c-be51-43df1da17bde-audit-dir\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.627473 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afce2ce8-429d-4f15-b746-a6de58cd6246-serving-cert\") pod \"apiserver-8596bd845d-pv79n\" (UID: \"afce2ce8-429d-4f15-b746-a6de58cd6246\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.627511 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-serving-cert\") pod \"controller-manager-65b6cccf98-9jr6t\" (UID: \"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.627530 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a9fb027d-1ae2-484c-be51-43df1da17bde-etcd-client\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.627716 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.628011 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/afce2ce8-429d-4f15-b746-a6de58cd6246-etcd-serving-ca\") pod \"apiserver-8596bd845d-pv79n\" (UID: \"afce2ce8-429d-4f15-b746-a6de58cd6246\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.628126 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/afce2ce8-429d-4f15-b746-a6de58cd6246-etcd-client\") pod \"apiserver-8596bd845d-pv79n\" (UID: \"afce2ce8-429d-4f15-b746-a6de58cd6246\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.628198 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee0a66bb-0ca8-4951-bc8b-d9ef6cfe3639-config\") pod \"machine-approver-54c688565-hp4tr\" (UID: \"ee0a66bb-0ca8-4951-bc8b-d9ef6cfe3639\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hp4tr"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.628225 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnmmk\" (UniqueName: \"kubernetes.io/projected/90663dc8-366f-45cb-8db2-8360cdc28f74-kube-api-access-nnmmk\") pod \"route-controller-manager-776cdc94d6-9b4f2\" (UID: \"90663dc8-366f-45cb-8db2-8360cdc28f74\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.628245 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9fb027d-1ae2-484c-be51-43df1da17bde-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.628319 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwlc8\" (UniqueName: \"kubernetes.io/projected/6f7d2c51-135d-4c03-a786-49ddacf80604-kube-api-access-lwlc8\") pod \"console-operator-67c89758df-hfkdh\" (UID: \"6f7d2c51-135d-4c03-a786-49ddacf80604\") " pod="openshift-console-operator/console-operator-67c89758df-hfkdh"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.628323 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-s2npb"]
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.628335 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a9fb027d-1ae2-484c-be51-43df1da17bde-encryption-config\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.628389 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.628408 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3c24304d-6405-40ae-b1ce-9d1ed668e39f-images\") pod \"machine-api-operator-755bb95488-cgkwz\" (UID: \"3c24304d-6405-40ae-b1ce-9d1ed668e39f\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cgkwz"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.628426 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f7d2c51-135d-4c03-a786-49ddacf80604-serving-cert\") pod \"console-operator-67c89758df-hfkdh\" (UID: \"6f7d2c51-135d-4c03-a786-49ddacf80604\") " pod="openshift-console-operator/console-operator-67c89758df-hfkdh"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.628443 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-config\") pod \"controller-manager-65b6cccf98-9jr6t\" (UID: \"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.628544 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-client-ca\") pod \"controller-manager-65b6cccf98-9jr6t\" (UID: \"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.628569 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/afce2ce8-429d-4f15-b746-a6de58cd6246-audit-policies\") pod \"apiserver-8596bd845d-pv79n\" (UID: \"afce2ce8-429d-4f15-b746-a6de58cd6246\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.628587 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.628602 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37511e72-cd8c-48b6-ae0d-43cac767eb19-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-kwtc2\" (UID: \"37511e72-cd8c-48b6-ae0d-43cac767eb19\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kwtc2"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.628619 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5388c205-9818-42bc-b518-455547e8faf1-config\") pod \"openshift-controller-manager-operator-686468bdd5-jdp2z\" (UID: \"5388c205-9818-42bc-b518-455547e8faf1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jdp2z"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.628645 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwcf5\" (UniqueName: \"kubernetes.io/projected/100e0294-a4c4-4f06-88a9-951818ed8a9c-kube-api-access-fwcf5\") pod \"migrator-866fcbc849-xkpsj\" (UID: \"100e0294-a4c4-4f06-88a9-951818ed8a9c\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xkpsj"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.628661 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9fb027d-1ae2-484c-be51-43df1da17bde-serving-cert\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.628516 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-x4v9v"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.628722 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dddece50-dbb3-4cd0-9102-15371396ab49-audit-policies\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.628758 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.628867 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.628890 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ee0a66bb-0ca8-4951-bc8b-d9ef6cfe3639-machine-approver-tls\") pod \"machine-approver-54c688565-hp4tr\" (UID: \"ee0a66bb-0ca8-4951-bc8b-d9ef6cfe3639\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hp4tr"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.629198 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3c24304d-6405-40ae-b1ce-9d1ed668e39f-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-cgkwz\" (UID: \"3c24304d-6405-40ae-b1ce-9d1ed668e39f\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cgkwz"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.629215 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/afce2ce8-429d-4f15-b746-a6de58cd6246-audit-policies\") pod \"apiserver-8596bd845d-pv79n\" (UID: \"afce2ce8-429d-4f15-b746-a6de58cd6246\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.629249 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/afce2ce8-429d-4f15-b746-a6de58cd6246-encryption-config\") pod \"apiserver-8596bd845d-pv79n\" (UID: \"afce2ce8-429d-4f15-b746-a6de58cd6246\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.629282 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q5sg8\" (UniqueName: \"kubernetes.io/projected/afce2ce8-429d-4f15-b746-a6de58cd6246-kube-api-access-q5sg8\") pod \"apiserver-8596bd845d-pv79n\" (UID: \"afce2ce8-429d-4f15-b746-a6de58cd6246\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.629300 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/90663dc8-366f-45cb-8db2-8360cdc28f74-client-ca\") pod \"route-controller-manager-776cdc94d6-9b4f2\" (UID: \"90663dc8-366f-45cb-8db2-8360cdc28f74\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.629334 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/90663dc8-366f-45cb-8db2-8360cdc28f74-tmp\") pod \"route-controller-manager-776cdc94d6-9b4f2\" (UID: \"90663dc8-366f-45cb-8db2-8360cdc28f74\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.629350 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f7d2c51-135d-4c03-a786-49ddacf80604-trusted-ca\") pod \"console-operator-67c89758df-hfkdh\" (UID: \"6f7d2c51-135d-4c03-a786-49ddacf80604\") " pod="openshift-console-operator/console-operator-67c89758df-hfkdh"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.629647 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a9fb027d-1ae2-484c-be51-43df1da17bde-image-import-ca\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.629706 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90663dc8-366f-45cb-8db2-8360cdc28f74-serving-cert\") pod \"route-controller-manager-776cdc94d6-9b4f2\" (UID: \"90663dc8-366f-45cb-8db2-8360cdc28f74\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.629742 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a8c6a899-cb63-47a3-b599-77212125f7d9-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-lzsrd\" (UID: \"a8c6a899-cb63-47a3-b599-77212125f7d9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lzsrd"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.629766 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8rfq\" (UniqueName: \"kubernetes.io/projected/ee0a66bb-0ca8-4951-bc8b-d9ef6cfe3639-kube-api-access-p8rfq\") pod \"machine-approver-54c688565-hp4tr\" (UID: \"ee0a66bb-0ca8-4951-bc8b-d9ef6cfe3639\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hp4tr"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.629796 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.629819 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.629877 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.629928 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.629957 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfccw\" (UniqueName: \"kubernetes.io/projected/5388c205-9818-42bc-b518-455547e8faf1-kube-api-access-xfccw\") pod \"openshift-controller-manager-operator-686468bdd5-jdp2z\" (UID: \"5388c205-9818-42bc-b518-455547e8faf1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jdp2z"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.629995 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afce2ce8-429d-4f15-b746-a6de58cd6246-trusted-ca-bundle\") pod \"apiserver-8596bd845d-pv79n\" (UID: \"afce2ce8-429d-4f15-b746-a6de58cd6246\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.630028 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx5xj\" (UniqueName: \"kubernetes.io/projected/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-kube-api-access-zx5xj\") pod \"controller-manager-65b6cccf98-9jr6t\" (UID: \"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.630050 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5388c205-9818-42bc-b518-455547e8faf1-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-jdp2z\" (UID: \"5388c205-9818-42bc-b518-455547e8faf1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jdp2z"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.630399 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afce2ce8-429d-4f15-b746-a6de58cd6246-trusted-ca-bundle\") pod \"apiserver-8596bd845d-pv79n\" (UID: \"afce2ce8-429d-4f15-b746-a6de58cd6246\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.631597 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.632489 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.632529 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.633614 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/afce2ce8-429d-4f15-b746-a6de58cd6246-etcd-client\") pod \"apiserver-8596bd845d-pv79n\" (UID: \"afce2ce8-429d-4f15-b746-a6de58cd6246\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.635167 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afce2ce8-429d-4f15-b746-a6de58cd6246-serving-cert\") pod \"apiserver-8596bd845d-pv79n\" (UID: \"afce2ce8-429d-4f15-b746-a6de58cd6246\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.636798 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/afce2ce8-429d-4f15-b746-a6de58cd6246-encryption-config\") pod \"apiserver-8596bd845d-pv79n\" (UID: \"afce2ce8-429d-4f15-b746-a6de58cd6246\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.653329 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.673144 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.693815 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.702999 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-xfffw"]
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.703106 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-s2npb"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.712576 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.730781 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-serving-cert\") pod \"controller-manager-65b6cccf98-9jr6t\" (UID: \"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.730832 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a9fb027d-1ae2-484c-be51-43df1da17bde-etcd-client\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.730979 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee0a66bb-0ca8-4951-bc8b-d9ef6cfe3639-config\") pod \"machine-approver-54c688565-hp4tr\" (UID: \"ee0a66bb-0ca8-4951-bc8b-d9ef6cfe3639\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hp4tr"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.731030 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nnmmk\" (UniqueName: \"kubernetes.io/projected/90663dc8-366f-45cb-8db2-8360cdc28f74-kube-api-access-nnmmk\") pod \"route-controller-manager-776cdc94d6-9b4f2\" (UID: \"90663dc8-366f-45cb-8db2-8360cdc28f74\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.731049 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9fb027d-1ae2-484c-be51-43df1da17bde-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.731070 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lwlc8\" (UniqueName: \"kubernetes.io/projected/6f7d2c51-135d-4c03-a786-49ddacf80604-kube-api-access-lwlc8\") pod \"console-operator-67c89758df-hfkdh\" (UID: \"6f7d2c51-135d-4c03-a786-49ddacf80604\") " pod="openshift-console-operator/console-operator-67c89758df-hfkdh"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.731089 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a9fb027d-1ae2-484c-be51-43df1da17bde-encryption-config\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.731111 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tj29\" (UniqueName: \"kubernetes.io/projected/16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240-kube-api-access-9tj29\") pod \"router-default-68cf44c8b8-v9567\" (UID: \"16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240\") " pod="openshift-ingress/router-default-68cf44c8b8-v9567"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.731133 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.731165 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3c24304d-6405-40ae-b1ce-9d1ed668e39f-images\") pod \"machine-api-operator-755bb95488-cgkwz\" (UID: \"3c24304d-6405-40ae-b1ce-9d1ed668e39f\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cgkwz"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.731185 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f7d2c51-135d-4c03-a786-49ddacf80604-serving-cert\") pod \"console-operator-67c89758df-hfkdh\" (UID: \"6f7d2c51-135d-4c03-a786-49ddacf80604\") " pod="openshift-console-operator/console-operator-67c89758df-hfkdh"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.731200 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-config\") pod \"controller-manager-65b6cccf98-9jr6t\" (UID: \"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.731217 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-client-ca\") pod \"controller-manager-65b6cccf98-9jr6t\" (UID: \"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.731243 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.731261 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37511e72-cd8c-48b6-ae0d-43cac767eb19-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-kwtc2\" (UID: \"37511e72-cd8c-48b6-ae0d-43cac767eb19\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kwtc2"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.731277 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5388c205-9818-42bc-b518-455547e8faf1-config\") pod \"openshift-controller-manager-operator-686468bdd5-jdp2z\" (UID: \"5388c205-9818-42bc-b518-455547e8faf1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jdp2z"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.731306 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fwcf5\" (UniqueName: \"kubernetes.io/projected/100e0294-a4c4-4f06-88a9-951818ed8a9c-kube-api-access-fwcf5\") pod \"migrator-866fcbc849-xkpsj\" (UID: \"100e0294-a4c4-4f06-88a9-951818ed8a9c\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xkpsj"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.731323 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9fb027d-1ae2-484c-be51-43df1da17bde-serving-cert\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.731340 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240-metrics-certs\") pod \"router-default-68cf44c8b8-v9567\" (UID: \"16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240\") " pod="openshift-ingress/router-default-68cf44c8b8-v9567"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.731362 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dddece50-dbb3-4cd0-9102-15371396ab49-audit-policies\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.731383 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.731465 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.731486 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ee0a66bb-0ca8-4951-bc8b-d9ef6cfe3639-machine-approver-tls\") pod \"machine-approver-54c688565-hp4tr\" (UID: \"ee0a66bb-0ca8-4951-bc8b-d9ef6cfe3639\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hp4tr"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.731502 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee0a66bb-0ca8-4951-bc8b-d9ef6cfe3639-config\") pod \"machine-approver-54c688565-hp4tr\" (UID: \"ee0a66bb-0ca8-4951-bc8b-d9ef6cfe3639\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hp4tr"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.731514 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3c24304d-6405-40ae-b1ce-9d1ed668e39f-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-cgkwz\" (UID: \"3c24304d-6405-40ae-b1ce-9d1ed668e39f\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cgkwz"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.731550 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/90663dc8-366f-45cb-8db2-8360cdc28f74-client-ca\") pod \"route-controller-manager-776cdc94d6-9b4f2\" (UID: \"90663dc8-366f-45cb-8db2-8360cdc28f74\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.731565 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/90663dc8-366f-45cb-8db2-8360cdc28f74-tmp\") pod \"route-controller-manager-776cdc94d6-9b4f2\" (UID: \"90663dc8-366f-45cb-8db2-8360cdc28f74\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.731579 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f7d2c51-135d-4c03-a786-49ddacf80604-trusted-ca\") pod \"console-operator-67c89758df-hfkdh\" (UID: \"6f7d2c51-135d-4c03-a786-49ddacf80604\") " pod="openshift-console-operator/console-operator-67c89758df-hfkdh"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.731600 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a9fb027d-1ae2-484c-be51-43df1da17bde-image-import-ca\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.731622 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90663dc8-366f-45cb-8db2-8360cdc28f74-serving-cert\") pod \"route-controller-manager-776cdc94d6-9b4f2\" (UID: \"90663dc8-366f-45cb-8db2-8360cdc28f74\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.731640 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a8c6a899-cb63-47a3-b599-77212125f7d9-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-lzsrd\" (UID: \"a8c6a899-cb63-47a3-b599-77212125f7d9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lzsrd"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.731657 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p8rfq\" (UniqueName: \"kubernetes.io/projected/ee0a66bb-0ca8-4951-bc8b-d9ef6cfe3639-kube-api-access-p8rfq\") pod \"machine-approver-54c688565-hp4tr\" (UID: \"ee0a66bb-0ca8-4951-bc8b-d9ef6cfe3639\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hp4tr"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.732489 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5388c205-9818-42bc-b518-455547e8faf1-config\") pod \"openshift-controller-manager-operator-686468bdd5-jdp2z\" (UID: \"5388c205-9818-42bc-b518-455547e8faf1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jdp2z"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.732570 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.732597 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.732625 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br"
Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.732645 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName:
\"kubernetes.io/configmap/16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240-service-ca-bundle\") pod \"router-default-68cf44c8b8-v9567\" (UID: \"16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240\") " pod="openshift-ingress/router-default-68cf44c8b8-v9567" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.732670 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.732687 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xfccw\" (UniqueName: \"kubernetes.io/projected/5388c205-9818-42bc-b518-455547e8faf1-kube-api-access-xfccw\") pod \"openshift-controller-manager-operator-686468bdd5-jdp2z\" (UID: \"5388c205-9818-42bc-b518-455547e8faf1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jdp2z" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.732704 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240-default-certificate\") pod \"router-default-68cf44c8b8-v9567\" (UID: \"16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240\") " pod="openshift-ingress/router-default-68cf44c8b8-v9567" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.732727 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zx5xj\" (UniqueName: \"kubernetes.io/projected/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-kube-api-access-zx5xj\") pod \"controller-manager-65b6cccf98-9jr6t\" (UID: \"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346\") " 
pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.732744 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5388c205-9818-42bc-b518-455547e8faf1-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-jdp2z\" (UID: \"5388c205-9818-42bc-b518-455547e8faf1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jdp2z" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.732765 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f7d2c51-135d-4c03-a786-49ddacf80604-config\") pod \"console-operator-67c89758df-hfkdh\" (UID: \"6f7d2c51-135d-4c03-a786-49ddacf80604\") " pod="openshift-console-operator/console-operator-67c89758df-hfkdh" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.732783 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37511e72-cd8c-48b6-ae0d-43cac767eb19-serving-cert\") pod \"authentication-operator-7f5c659b84-kwtc2\" (UID: \"37511e72-cd8c-48b6-ae0d-43cac767eb19\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kwtc2" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.732810 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qdzqd\" (UniqueName: \"kubernetes.io/projected/78ef6e28-4ff8-4744-88df-70e2aaaa2873-kube-api-access-qdzqd\") pod \"openshift-apiserver-operator-846cbfc458-z2h5b\" (UID: \"78ef6e28-4ff8-4744-88df-70e2aaaa2873\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-z2h5b" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.732829 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-tmp\") pod \"controller-manager-65b6cccf98-9jr6t\" (UID: \"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.732850 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a9fb027d-1ae2-484c-be51-43df1da17bde-node-pullsecrets\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.732865 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a9fb027d-1ae2-484c-be51-43df1da17bde-audit\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.732883 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a9fb027d-1ae2-484c-be51-43df1da17bde-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.732899 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p8p5h\" (UniqueName: \"kubernetes.io/projected/a9fb027d-1ae2-484c-be51-43df1da17bde-kube-api-access-p8p5h\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.732930 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.732955 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/78d60dd3-8522-4475-a323-6acf4ac1abdc-available-featuregates\") pod \"openshift-config-operator-5777786469-vl5r9\" (UID: \"78d60dd3-8522-4475-a323-6acf4ac1abdc\") " pod="openshift-config-operator/openshift-config-operator-5777786469-vl5r9" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.732972 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37511e72-cd8c-48b6-ae0d-43cac767eb19-config\") pod \"authentication-operator-7f5c659b84-kwtc2\" (UID: \"37511e72-cd8c-48b6-ae0d-43cac767eb19\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kwtc2" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.732998 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.733028 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fvctm\" (UniqueName: \"kubernetes.io/projected/dddece50-dbb3-4cd0-9102-15371396ab49-kube-api-access-fvctm\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" Dec 11 
16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.733052 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nt7xj\" (UniqueName: \"kubernetes.io/projected/a8c6a899-cb63-47a3-b599-77212125f7d9-kube-api-access-nt7xj\") pod \"cluster-samples-operator-6b564684c8-lzsrd\" (UID: \"a8c6a899-cb63-47a3-b599-77212125f7d9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lzsrd" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.733071 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37511e72-cd8c-48b6-ae0d-43cac767eb19-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-kwtc2\" (UID: \"37511e72-cd8c-48b6-ae0d-43cac767eb19\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kwtc2" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.733124 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee0a66bb-0ca8-4951-bc8b-d9ef6cfe3639-auth-proxy-config\") pod \"machine-approver-54c688565-hp4tr\" (UID: \"ee0a66bb-0ca8-4951-bc8b-d9ef6cfe3639\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hp4tr" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.733195 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78ef6e28-4ff8-4744-88df-70e2aaaa2873-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-z2h5b\" (UID: \"78ef6e28-4ff8-4744-88df-70e2aaaa2873\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-z2h5b" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.733232 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-9jr6t\" (UID: \"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.733253 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78d60dd3-8522-4475-a323-6acf4ac1abdc-serving-cert\") pod \"openshift-config-operator-5777786469-vl5r9\" (UID: \"78d60dd3-8522-4475-a323-6acf4ac1abdc\") " pod="openshift-config-operator/openshift-config-operator-5777786469-vl5r9" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.733272 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jftg8\" (UniqueName: \"kubernetes.io/projected/78d60dd3-8522-4475-a323-6acf4ac1abdc-kube-api-access-jftg8\") pod \"openshift-config-operator-5777786469-vl5r9\" (UID: \"78d60dd3-8522-4475-a323-6acf4ac1abdc\") " pod="openshift-config-operator/openshift-config-operator-5777786469-vl5r9" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.733287 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dddece50-dbb3-4cd0-9102-15371396ab49-audit-dir\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.733303 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" Dec 11 16:02:44 crc 
kubenswrapper[5120]: I1211 16:02:44.733417 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78ef6e28-4ff8-4744-88df-70e2aaaa2873-config\") pod \"openshift-apiserver-operator-846cbfc458-z2h5b\" (UID: \"78ef6e28-4ff8-4744-88df-70e2aaaa2873\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-z2h5b" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.733423 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dddece50-dbb3-4cd0-9102-15371396ab49-audit-policies\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.733440 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c24304d-6405-40ae-b1ce-9d1ed668e39f-config\") pod \"machine-api-operator-755bb95488-cgkwz\" (UID: \"3c24304d-6405-40ae-b1ce-9d1ed668e39f\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cgkwz" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.733471 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cmk49\" (UniqueName: \"kubernetes.io/projected/3c24304d-6405-40ae-b1ce-9d1ed668e39f-kube-api-access-cmk49\") pod \"machine-api-operator-755bb95488-cgkwz\" (UID: \"3c24304d-6405-40ae-b1ce-9d1ed668e39f\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cgkwz" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.733497 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90663dc8-366f-45cb-8db2-8360cdc28f74-config\") pod \"route-controller-manager-776cdc94d6-9b4f2\" (UID: \"90663dc8-366f-45cb-8db2-8360cdc28f74\") " 
pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.733524 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q869b\" (UniqueName: \"kubernetes.io/projected/37511e72-cd8c-48b6-ae0d-43cac767eb19-kube-api-access-q869b\") pod \"authentication-operator-7f5c659b84-kwtc2\" (UID: \"37511e72-cd8c-48b6-ae0d-43cac767eb19\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kwtc2" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.733538 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-chrjf"] Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.733549 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5388c205-9818-42bc-b518-455547e8faf1-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-jdp2z\" (UID: \"5388c205-9818-42bc-b518-455547e8faf1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jdp2z" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.733660 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9fb027d-1ae2-484c-be51-43df1da17bde-config\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.733746 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a9fb027d-1ae2-484c-be51-43df1da17bde-audit-dir\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.733814 5120 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37511e72-cd8c-48b6-ae0d-43cac767eb19-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-kwtc2\" (UID: \"37511e72-cd8c-48b6-ae0d-43cac767eb19\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kwtc2" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.733697 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a9fb027d-1ae2-484c-be51-43df1da17bde-audit-dir\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.733897 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240-stats-auth\") pod \"router-default-68cf44c8b8-v9567\" (UID: \"16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240\") " pod="openshift-ingress/router-default-68cf44c8b8-v9567" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.733986 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/90663dc8-366f-45cb-8db2-8360cdc28f74-tmp\") pod \"route-controller-manager-776cdc94d6-9b4f2\" (UID: \"90663dc8-366f-45cb-8db2-8360cdc28f74\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.734010 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3c24304d-6405-40ae-b1ce-9d1ed668e39f-images\") pod \"machine-api-operator-755bb95488-cgkwz\" (UID: \"3c24304d-6405-40ae-b1ce-9d1ed668e39f\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cgkwz" Dec 11 16:02:44 crc 
kubenswrapper[5120]: I1211 16:02:44.734525 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c24304d-6405-40ae-b1ce-9d1ed668e39f-config\") pod \"machine-api-operator-755bb95488-cgkwz\" (UID: \"3c24304d-6405-40ae-b1ce-9d1ed668e39f\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cgkwz" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.733465 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9fb027d-1ae2-484c-be51-43df1da17bde-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.734648 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-serving-cert\") pod \"controller-manager-65b6cccf98-9jr6t\" (UID: \"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.734772 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-xfffw" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.734786 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.735852 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dddece50-dbb3-4cd0-9102-15371396ab49-audit-dir\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.735954 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a9fb027d-1ae2-484c-be51-43df1da17bde-encryption-config\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.736305 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.736603 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee0a66bb-0ca8-4951-bc8b-d9ef6cfe3639-auth-proxy-config\") pod \"machine-approver-54c688565-hp4tr\" (UID: \"ee0a66bb-0ca8-4951-bc8b-d9ef6cfe3639\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hp4tr" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.737356 5120 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-9jr6t\" (UID: \"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.737673 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f7d2c51-135d-4c03-a786-49ddacf80604-trusted-ca\") pod \"console-operator-67c89758df-hfkdh\" (UID: \"6f7d2c51-135d-4c03-a786-49ddacf80604\") " pod="openshift-console-operator/console-operator-67c89758df-hfkdh" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.737713 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/90663dc8-366f-45cb-8db2-8360cdc28f74-client-ca\") pod \"route-controller-manager-776cdc94d6-9b4f2\" (UID: \"90663dc8-366f-45cb-8db2-8360cdc28f74\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.738138 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f7d2c51-135d-4c03-a786-49ddacf80604-config\") pod \"console-operator-67c89758df-hfkdh\" (UID: \"6f7d2c51-135d-4c03-a786-49ddacf80604\") " pod="openshift-console-operator/console-operator-67c89758df-hfkdh" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.738680 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3c24304d-6405-40ae-b1ce-9d1ed668e39f-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-cgkwz\" (UID: \"3c24304d-6405-40ae-b1ce-9d1ed668e39f\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cgkwz" Dec 11 
16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.738904 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90663dc8-366f-45cb-8db2-8360cdc28f74-config\") pod \"route-controller-manager-776cdc94d6-9b4f2\" (UID: \"90663dc8-366f-45cb-8db2-8360cdc28f74\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.739003 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78ef6e28-4ff8-4744-88df-70e2aaaa2873-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-z2h5b\" (UID: \"78ef6e28-4ff8-4744-88df-70e2aaaa2873\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-z2h5b" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.739450 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.739475 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5388c205-9818-42bc-b518-455547e8faf1-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-jdp2z\" (UID: \"5388c205-9818-42bc-b518-455547e8faf1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jdp2z" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.739777 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37511e72-cd8c-48b6-ae0d-43cac767eb19-trusted-ca-bundle\") pod 
\"authentication-operator-7f5c659b84-kwtc2\" (UID: \"37511e72-cd8c-48b6-ae0d-43cac767eb19\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kwtc2" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.739997 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-tmp\") pod \"controller-manager-65b6cccf98-9jr6t\" (UID: \"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.740042 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a9fb027d-1ae2-484c-be51-43df1da17bde-node-pullsecrets\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.740287 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.740545 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.740897 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.741032 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/78d60dd3-8522-4475-a323-6acf4ac1abdc-available-featuregates\") pod \"openshift-config-operator-5777786469-vl5r9\" (UID: \"78d60dd3-8522-4475-a323-6acf4ac1abdc\") " pod="openshift-config-operator/openshift-config-operator-5777786469-vl5r9" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.741238 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9fb027d-1ae2-484c-be51-43df1da17bde-serving-cert\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.741297 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.741523 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-client-ca\") pod \"controller-manager-65b6cccf98-9jr6t\" (UID: \"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 
16:02:44.741586 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.741712 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78ef6e28-4ff8-4744-88df-70e2aaaa2873-config\") pod \"openshift-apiserver-operator-846cbfc458-z2h5b\" (UID: \"78ef6e28-4ff8-4744-88df-70e2aaaa2873\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-z2h5b" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.742037 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.742136 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5388c205-9818-42bc-b518-455547e8faf1-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-jdp2z\" (UID: \"5388c205-9818-42bc-b518-455547e8faf1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jdp2z" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.742488 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a9fb027d-1ae2-484c-be51-43df1da17bde-etcd-client\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: 
\"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.742618 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.743197 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mvwb4"] Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.743332 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90663dc8-366f-45cb-8db2-8360cdc28f74-serving-cert\") pod \"route-controller-manager-776cdc94d6-9b4f2\" (UID: \"90663dc8-366f-45cb-8db2-8360cdc28f74\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.743372 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-chrjf" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.743454 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f7d2c51-135d-4c03-a786-49ddacf80604-serving-cert\") pod \"console-operator-67c89758df-hfkdh\" (UID: \"6f7d2c51-135d-4c03-a786-49ddacf80604\") " pod="openshift-console-operator/console-operator-67c89758df-hfkdh" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.744021 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37511e72-cd8c-48b6-ae0d-43cac767eb19-serving-cert\") pod \"authentication-operator-7f5c659b84-kwtc2\" (UID: \"37511e72-cd8c-48b6-ae0d-43cac767eb19\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kwtc2" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.744925 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.744956 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78d60dd3-8522-4475-a323-6acf4ac1abdc-serving-cert\") pod \"openshift-config-operator-5777786469-vl5r9\" (UID: \"78d60dd3-8522-4475-a323-6acf4ac1abdc\") " pod="openshift-config-operator/openshift-config-operator-5777786469-vl5r9" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.745772 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a8c6a899-cb63-47a3-b599-77212125f7d9-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-lzsrd\" (UID: \"a8c6a899-cb63-47a3-b599-77212125f7d9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lzsrd" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.746221 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.746872 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ee0a66bb-0ca8-4951-bc8b-d9ef6cfe3639-machine-approver-tls\") pod \"machine-approver-54c688565-hp4tr\" (UID: \"ee0a66bb-0ca8-4951-bc8b-d9ef6cfe3639\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hp4tr" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.747729 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37511e72-cd8c-48b6-ae0d-43cac767eb19-config\") pod \"authentication-operator-7f5c659b84-kwtc2\" (UID: \"37511e72-cd8c-48b6-ae0d-43cac767eb19\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kwtc2" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.748053 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-config\") pod \"controller-manager-65b6cccf98-9jr6t\" (UID: \"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 
16:02:44.748410 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a9fb027d-1ae2-484c-be51-43df1da17bde-image-import-ca\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.748967 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a9fb027d-1ae2-484c-be51-43df1da17bde-audit\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.749032 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9fb027d-1ae2-484c-be51-43df1da17bde-config\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.749561 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a9fb027d-1ae2-484c-be51-43df1da17bde-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.752994 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.754714 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-x89dk"] Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.754822 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mvwb4" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.760981 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-96ggd"] Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.761293 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-x89dk" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.771323 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cbgwz"] Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.771559 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-96ggd" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.772894 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.776286 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-pshw7"] Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.776706 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cbgwz" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.793105 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.797297 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-f5tq6"] Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.797393 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-pshw7" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.805879 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424480-4wgwc"] Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.805962 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-f5tq6" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.809395 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-wvrn4"] Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.809523 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424480-4wgwc" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.812784 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.812996 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fpf4"] Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.813299 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-wvrn4" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.819776 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-8bt64"] Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.819861 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fpf4" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.823052 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8mr9f"] Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.823124 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8bt64" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.826667 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5jrjd"] Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.826752 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8mr9f" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.830508 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-9qjgp"] Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.830606 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5jrjd" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.832317 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.834577 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240-service-ca-bundle\") pod \"router-default-68cf44c8b8-v9567\" (UID: \"16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240\") " pod="openshift-ingress/router-default-68cf44c8b8-v9567" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.834602 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240-default-certificate\") pod \"router-default-68cf44c8b8-v9567\" (UID: \"16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240\") " pod="openshift-ingress/router-default-68cf44c8b8-v9567" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.834660 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240-stats-auth\") pod \"router-default-68cf44c8b8-v9567\" (UID: \"16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240\") " pod="openshift-ingress/router-default-68cf44c8b8-v9567" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.834700 5120 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9tj29\" (UniqueName: \"kubernetes.io/projected/16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240-kube-api-access-9tj29\") pod \"router-default-68cf44c8b8-v9567\" (UID: \"16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240\") " pod="openshift-ingress/router-default-68cf44c8b8-v9567" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.834732 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240-metrics-certs\") pod \"router-default-68cf44c8b8-v9567\" (UID: \"16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240\") " pod="openshift-ingress/router-default-68cf44c8b8-v9567" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.835776 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-8rdg7"] Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.835892 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-9qjgp" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.837881 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240-default-certificate\") pod \"router-default-68cf44c8b8-v9567\" (UID: \"16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240\") " pod="openshift-ingress/router-default-68cf44c8b8-v9567" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.838356 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240-service-ca-bundle\") pod \"router-default-68cf44c8b8-v9567\" (UID: \"16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240\") " pod="openshift-ingress/router-default-68cf44c8b8-v9567" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.838775 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240-stats-auth\") pod \"router-default-68cf44c8b8-v9567\" (UID: \"16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240\") " pod="openshift-ingress/router-default-68cf44c8b8-v9567" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.841331 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240-metrics-certs\") pod \"router-default-68cf44c8b8-v9567\" (UID: \"16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240\") " pod="openshift-ingress/router-default-68cf44c8b8-v9567" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.853362 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.900114 5120 reflector.go:430] "Caches 
populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.913536 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.932692 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.971082 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5sg8\" (UniqueName: \"kubernetes.io/projected/afce2ce8-429d-4f15-b746-a6de58cd6246-kube-api-access-q5sg8\") pod \"apiserver-8596bd845d-pv79n\" (UID: \"afce2ce8-429d-4f15-b746-a6de58cd6246\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.973601 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 11 16:02:44 crc kubenswrapper[5120]: I1211 16:02:44.994069 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.012654 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f4gw6"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.013355 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-747b44746d-8rdg7" Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.013941 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.018024 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-q29gs"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.018604 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f4gw6" Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.023118 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-d5wst"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.023555 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-q29gs" Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.028683 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-d5wst" Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.034770 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jdp2z"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.034971 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-q2dhg"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.040437 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-z2h5b"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.040603 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.040708 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-z42wj"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.040707 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-q2dhg" Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.044819 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-sgddk"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.044977 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-z42wj" Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.047889 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-gtbd4"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.048040 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-sgddk" Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.051185 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-whnqw"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.051279 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-gtbd4" Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.054923 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-5qdss"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.055085 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-whnqw" Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.060894 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-pv79n"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.061009 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-5r6br"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.061091 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mvwb4"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.061185 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8jspp"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.061305 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lzsrd"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.061388 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-x89dk"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.061456 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-cgkwz"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.061125 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5qdss" Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.061522 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-9jr6t"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.061633 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-chrjf"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.061696 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-96ggd"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.061750 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-kwtc2"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.061801 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-hfkdh"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.061869 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-t6vqb"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.061934 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-x4v9v"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.062010 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-f5tq6"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.062074 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-vl5r9"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.062127 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-xkpsj"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.062193 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-8bt64"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.062257 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5qdss"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.062310 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8mr9f"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.062371 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424480-4wgwc"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.062423 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-wvrn4"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.062490 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-z42wj"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.062550 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-8rdg7"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.062604 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-q29gs"] Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.062656 5120 kubelet.go:2544] 
"SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fpf4"]
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.062734 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-xfffw"]
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.062787 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-s2npb"]
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.062863 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5jrjd"]
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.062933 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-d5wst"]
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.062999 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-9qjgp"]
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.063074 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cbgwz"]
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.063142 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-pshw7"]
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.063298 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-gtbd4"]
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.063385 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-q2dhg"]
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.063457 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f4gw6"]
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.070661 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8rfq\" (UniqueName: \"kubernetes.io/projected/ee0a66bb-0ca8-4951-bc8b-d9ef6cfe3639-kube-api-access-p8rfq\") pod \"machine-approver-54c688565-hp4tr\" (UID: \"ee0a66bb-0ca8-4951-bc8b-d9ef6cfe3639\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hp4tr"
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.087282 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwlc8\" (UniqueName: \"kubernetes.io/projected/6f7d2c51-135d-4c03-a786-49ddacf80604-kube-api-access-lwlc8\") pod \"console-operator-67c89758df-hfkdh\" (UID: \"6f7d2c51-135d-4c03-a786-49ddacf80604\") " pod="openshift-console-operator/console-operator-67c89758df-hfkdh"
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.105779 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnmmk\" (UniqueName: \"kubernetes.io/projected/90663dc8-366f-45cb-8db2-8360cdc28f74-kube-api-access-nnmmk\") pod \"route-controller-manager-776cdc94d6-9b4f2\" (UID: \"90663dc8-366f-45cb-8db2-8360cdc28f74\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2"
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.129240 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwcf5\" (UniqueName: \"kubernetes.io/projected/100e0294-a4c4-4f06-88a9-951818ed8a9c-kube-api-access-fwcf5\") pod \"migrator-866fcbc849-xkpsj\" (UID: \"100e0294-a4c4-4f06-88a9-951818ed8a9c\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xkpsj"
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.131042 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n"
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.150541 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jftg8\" (UniqueName: \"kubernetes.io/projected/78d60dd3-8522-4475-a323-6acf4ac1abdc-kube-api-access-jftg8\") pod \"openshift-config-operator-5777786469-vl5r9\" (UID: \"78d60dd3-8522-4475-a323-6acf4ac1abdc\") " pod="openshift-config-operator/openshift-config-operator-5777786469-vl5r9"
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.175412 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmk49\" (UniqueName: \"kubernetes.io/projected/3c24304d-6405-40ae-b1ce-9d1ed668e39f-kube-api-access-cmk49\") pod \"machine-api-operator-755bb95488-cgkwz\" (UID: \"3c24304d-6405-40ae-b1ce-9d1ed668e39f\") " pod="openshift-machine-api/machine-api-operator-755bb95488-cgkwz"
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.186309 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvctm\" (UniqueName: \"kubernetes.io/projected/dddece50-dbb3-4cd0-9102-15371396ab49-kube-api-access-fvctm\") pod \"oauth-openshift-66458b6674-5r6br\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " pod="openshift-authentication/oauth-openshift-66458b6674-5r6br"
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.193489 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.213450 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.225782 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-hp4tr"
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.233537 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\""
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.234614 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2"
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.254533 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Dec 11 16:02:45 crc kubenswrapper[5120]: W1211 16:02:45.257097 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee0a66bb_0ca8_4951_bc8b_d9ef6cfe3639.slice/crio-1fa45a76e28dc4cd8e0c9c13ce867621873a4d1ef45dd530617abf0e5c53b54b WatchSource:0}: Error finding container 1fa45a76e28dc4cd8e0c9c13ce867621873a4d1ef45dd530617abf0e5c53b54b: Status 404 returned error can't find the container with id 1fa45a76e28dc4cd8e0c9c13ce867621873a4d1ef45dd530617abf0e5c53b54b
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.277428 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-5r6br"
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.289276 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-hfkdh"
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.290744 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfccw\" (UniqueName: \"kubernetes.io/projected/5388c205-9818-42bc-b518-455547e8faf1-kube-api-access-xfccw\") pod \"openshift-controller-manager-operator-686468bdd5-jdp2z\" (UID: \"5388c205-9818-42bc-b518-455547e8faf1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jdp2z"
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.296379 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-pv79n"]
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.297644 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xkpsj"
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.302911 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-vl5r9"
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.309012 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx5xj\" (UniqueName: \"kubernetes.io/projected/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-kube-api-access-zx5xj\") pod \"controller-manager-65b6cccf98-9jr6t\" (UID: \"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t"
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.317639 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-cgkwz"
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.328945 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q869b\" (UniqueName: \"kubernetes.io/projected/37511e72-cd8c-48b6-ae0d-43cac767eb19-kube-api-access-q869b\") pod \"authentication-operator-7f5c659b84-kwtc2\" (UID: \"37511e72-cd8c-48b6-ae0d-43cac767eb19\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kwtc2"
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.352413 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdzqd\" (UniqueName: \"kubernetes.io/projected/78ef6e28-4ff8-4744-88df-70e2aaaa2873-kube-api-access-qdzqd\") pod \"openshift-apiserver-operator-846cbfc458-z2h5b\" (UID: \"78ef6e28-4ff8-4744-88df-70e2aaaa2873\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-z2h5b"
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.368796 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8p5h\" (UniqueName: \"kubernetes.io/projected/a9fb027d-1ae2-484c-be51-43df1da17bde-kube-api-access-p8p5h\") pod \"apiserver-9ddfb9f55-t6vqb\" (UID: \"a9fb027d-1ae2-484c-be51-43df1da17bde\") " pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb"
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.390888 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt7xj\" (UniqueName: \"kubernetes.io/projected/a8c6a899-cb63-47a3-b599-77212125f7d9-kube-api-access-nt7xj\") pod \"cluster-samples-operator-6b564684c8-lzsrd\" (UID: \"a8c6a899-cb63-47a3-b599-77212125f7d9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lzsrd"
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.393935 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\""
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.403521 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2"]
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.417310 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\""
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.437856 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\""
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.441362 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jdp2z"
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.455496 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\""
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.456328 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n" event={"ID":"afce2ce8-429d-4f15-b746-a6de58cd6246","Type":"ContainerStarted","Data":"7c0e1d6bd826e9198de78011ce0ab9bc91fc0fe30f7b6e5a4348ca38c2edf322"}
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.462753 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2" event={"ID":"90663dc8-366f-45cb-8db2-8360cdc28f74","Type":"ContainerStarted","Data":"36188ffbc21defcd55e88ffbfb579f2a20a9fc83189088ba1a31aeb7efb12f53"}
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.466090 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-hp4tr" event={"ID":"ee0a66bb-0ca8-4951-bc8b-d9ef6cfe3639","Type":"ContainerStarted","Data":"1fa45a76e28dc4cd8e0c9c13ce867621873a4d1ef45dd530617abf0e5c53b54b"}
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.474378 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\""
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.493011 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\""
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.514584 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\""
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.533138 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\""
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.555680 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-hfkdh"]
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.556766 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t"
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.558957 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.564338 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lzsrd"
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.571201 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kwtc2"
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.573410 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\""
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.583022 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-z2h5b"
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.593922 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.610585 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb"
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.612882 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.633015 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.653767 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.674002 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\""
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.698903 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.701290 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-5r6br"]
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.727637 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jdp2z"]
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.728061 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\""
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.733288 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.753948 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.770073 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-vl5r9"]
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.776931 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.792299 5120 request.go:752] "Waited before sending request" delay="1.015357743s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-4gqzj&limit=500&resourceVersion=0"
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.794547 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.795409 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-9jr6t"]
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.813365 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\""
Dec 11 16:02:45 crc kubenswrapper[5120]: W1211 16:02:45.821516 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ca3a5e5_2aab_4e5b_8756_2a725e8b3346.slice/crio-8b9b22cd7d8a2bd677b00e24419e8e044902d9713c70902755632bf073ba3ef3 WatchSource:0}: Error finding container 8b9b22cd7d8a2bd677b00e24419e8e044902d9713c70902755632bf073ba3ef3: Status 404 returned error can't find the container with id 8b9b22cd7d8a2bd677b00e24419e8e044902d9713c70902755632bf073ba3ef3
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.822294 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-cgkwz"]
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.828578 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-xkpsj"]
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.834741 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\""
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.852914 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.857486 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-kwtc2"]
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.873462 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\""
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.880249 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lzsrd"]
Dec 11 16:02:45 crc kubenswrapper[5120]: W1211 16:02:45.886405 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37511e72_cd8c_48b6_ae0d_43cac767eb19.slice/crio-45a233d460def012071262eb2d19ee1cee8b776a86ffef786d4d59a64e2a950e WatchSource:0}: Error finding container 45a233d460def012071262eb2d19ee1cee8b776a86ffef786d4d59a64e2a950e: Status 404 returned error can't find the container with id 45a233d460def012071262eb2d19ee1cee8b776a86ffef786d4d59a64e2a950e
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.897770 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\""
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.913855 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Dec 11 16:02:45 crc kubenswrapper[5120]: W1211 16:02:45.924817 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod100e0294_a4c4_4f06_88a9_951818ed8a9c.slice/crio-d6dae8340ca036173861e92a40367b599ff9e250a213a1b83453696dc59a8f04 WatchSource:0}: Error finding container d6dae8340ca036173861e92a40367b599ff9e250a213a1b83453696dc59a8f04: Status 404 returned error can't find the container with id d6dae8340ca036173861e92a40367b599ff9e250a213a1b83453696dc59a8f04
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.926479 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-t6vqb"]
Dec 11 16:02:45 crc kubenswrapper[5120]: W1211 16:02:45.929514 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9fb027d_1ae2_484c_be51_43df1da17bde.slice/crio-e1a6c6235cacdcbbeb02aa887d669cc008002c16969a14f3df974bd0092adf86 WatchSource:0}: Error finding container e1a6c6235cacdcbbeb02aa887d669cc008002c16969a14f3df974bd0092adf86: Status 404 returned error can't find the container with id e1a6c6235cacdcbbeb02aa887d669cc008002c16969a14f3df974bd0092adf86
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.932897 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\""
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.959612 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.963894 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-z2h5b"]
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.973828 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\""
Dec 11 16:02:45 crc kubenswrapper[5120]: W1211 16:02:45.991474 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78ef6e28_4ff8_4744_88df_70e2aaaa2873.slice/crio-d9c1bd7cd263728fff5a56e6ca08343081df387825b5df821a40d352ff7925f9 WatchSource:0}: Error finding container d9c1bd7cd263728fff5a56e6ca08343081df387825b5df821a40d352ff7925f9: Status 404 returned error can't find the container with id d9c1bd7cd263728fff5a56e6ca08343081df387825b5df821a40d352ff7925f9
Dec 11 16:02:45 crc kubenswrapper[5120]: I1211 16:02:45.993190 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.013206 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.032821 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.053375 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.073044 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.093891 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.112726 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.133349 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.164934 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\""
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.174464 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.192888 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.213100 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.234435 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.253640 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.284508 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.293991 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.313325 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.334568 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.355419 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\""
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.393457 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.394249 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tj29\" (UniqueName: \"kubernetes.io/projected/16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240-kube-api-access-9tj29\") pod \"router-default-68cf44c8b8-v9567\" (UID: \"16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240\") " pod="openshift-ingress/router-default-68cf44c8b8-v9567"
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.414635 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\""
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.434703 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.470729 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-hp4tr" event={"ID":"ee0a66bb-0ca8-4951-bc8b-d9ef6cfe3639","Type":"ContainerStarted","Data":"09dbb9ccfec81f142a5db69bacd8ef7b7f7653fcfe9f8b3649fc7cc798c55c68"}
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.470768 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-hp4tr" event={"ID":"ee0a66bb-0ca8-4951-bc8b-d9ef6cfe3639","Type":"ContainerStarted","Data":"6145025d131c4d64b2f0b485ca887ba74fd8ce8c831151f4140dcf73a8c6e4bf"}
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.472199 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kwtc2" event={"ID":"37511e72-cd8c-48b6-ae0d-43cac767eb19","Type":"ContainerStarted","Data":"79fcb6ffe57f6ddfec5ae9d6b1a1a1eff902dc34eea8018dec0f2d6e4fa88141"}
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.472222 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kwtc2" event={"ID":"37511e72-cd8c-48b6-ae0d-43cac767eb19","Type":"ContainerStarted","Data":"45a233d460def012071262eb2d19ee1cee8b776a86ffef786d4d59a64e2a950e"}
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.473449 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jdp2z" event={"ID":"5388c205-9818-42bc-b518-455547e8faf1","Type":"ContainerStarted","Data":"651a6629a03eca8fee6586a8085c744485443cdb680ba520c862a021ae80a96c"}
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.473471 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jdp2z" event={"ID":"5388c205-9818-42bc-b518-455547e8faf1","Type":"ContainerStarted","Data":"bcb7450ce23f1e17982e2163202ceca660d06de5956a2d2976c86d09fad341c3"}
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.473968 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\""
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.475799 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lzsrd" event={"ID":"a8c6a899-cb63-47a3-b599-77212125f7d9","Type":"ContainerStarted","Data":"7209d4d8c356d4a0b351daf7424f45b021f9f2b026284486c527931868943b32"}
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.475848 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lzsrd" event={"ID":"a8c6a899-cb63-47a3-b599-77212125f7d9","Type":"ContainerStarted","Data":"38ec69c2bbfc1aa1a2ddbc9f817e876e0451c208360d731a01bd489bcd368ed9"}
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.475862 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lzsrd" event={"ID":"a8c6a899-cb63-47a3-b599-77212125f7d9","Type":"ContainerStarted","Data":"e2340d80a7bd1fb523f6835b85317454b8924324f9e5f4181b369a72747b3bf2"}
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.478738 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-cgkwz" event={"ID":"3c24304d-6405-40ae-b1ce-9d1ed668e39f","Type":"ContainerStarted","Data":"536b47c941a56ae953c2ef04f19d6df402e5470a9aff69f8a8a3ef47d23e8661"}
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.478769 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-cgkwz" event={"ID":"3c24304d-6405-40ae-b1ce-9d1ed668e39f","Type":"ContainerStarted","Data":"127b4cf1818d4e73bbf32cc24abf118bcf81e3dd7bb283059e2d31fbc1eef0a3"}
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.478781 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-cgkwz" event={"ID":"3c24304d-6405-40ae-b1ce-9d1ed668e39f","Type":"ContainerStarted","Data":"7e9908eaed85d15bb6171bce3079adf46cf3d29515d91c346183d5902683bacb"}
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.480675 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xkpsj" event={"ID":"100e0294-a4c4-4f06-88a9-951818ed8a9c","Type":"ContainerStarted","Data":"e18942d47b604af7057e753e3cf789cba9d90b8c68f6803c955f66a9ea12bb4b"}
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.480708 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xkpsj" event={"ID":"100e0294-a4c4-4f06-88a9-951818ed8a9c","Type":"ContainerStarted","Data":"a11a59913f6e554510dddcd133568cd46707ddb888833d671f72846d7f108299"}
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.480721 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xkpsj" event={"ID":"100e0294-a4c4-4f06-88a9-951818ed8a9c","Type":"ContainerStarted","Data":"d6dae8340ca036173861e92a40367b599ff9e250a213a1b83453696dc59a8f04"}
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.482332 5120 generic.go:358] "Generic (PLEG): container finished" podID="afce2ce8-429d-4f15-b746-a6de58cd6246" containerID="6b9e3cd2850573be41f9fc6e832d93a6d903ab4b5023eef6195af3632e730d06" exitCode=0
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.482413 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n" event={"ID":"afce2ce8-429d-4f15-b746-a6de58cd6246","Type":"ContainerDied","Data":"6b9e3cd2850573be41f9fc6e832d93a6d903ab4b5023eef6195af3632e730d06"}
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.483677 5120 generic.go:358] "Generic (PLEG): container finished" podID="a9fb027d-1ae2-484c-be51-43df1da17bde" containerID="a80cbb356bc3829b4502a0b03ee2ad4ffbd1878f9913adf24450d7a88203452d" exitCode=0
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.484132 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb" event={"ID":"a9fb027d-1ae2-484c-be51-43df1da17bde","Type":"ContainerDied","Data":"a80cbb356bc3829b4502a0b03ee2ad4ffbd1878f9913adf24450d7a88203452d"}
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.484178 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb" event={"ID":"a9fb027d-1ae2-484c-be51-43df1da17bde","Type":"ContainerStarted","Data":"e1a6c6235cacdcbbeb02aa887d669cc008002c16969a14f3df974bd0092adf86"}
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.485959 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2" event={"ID":"90663dc8-366f-45cb-8db2-8360cdc28f74","Type":"ContainerStarted","Data":"67f2451e868c6e43ffdff925eb1f02e405c2012369989db507653ffea6a60d53"}
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.485990 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2"
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.488444 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t" event={"ID":"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346","Type":"ContainerStarted","Data":"cb981134b091a0de58f54d5143a7c44ab8701e3219c07959b9ef3f927714ff7f"}
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.488483 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t" event={"ID":"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346","Type":"ContainerStarted","Data":"8b9b22cd7d8a2bd677b00e24419e8e044902d9713c70902755632bf073ba3ef3"}
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.489813 5120 generic.go:358] "Generic (PLEG): container finished" podID="78d60dd3-8522-4475-a323-6acf4ac1abdc" containerID="bf89ade5c4f47ccb02c4b26cb0dbc7d9957bf63315a2bcfbe32ac0d3a648283c" exitCode=0
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.489992 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-vl5r9" event={"ID":"78d60dd3-8522-4475-a323-6acf4ac1abdc","Type":"ContainerDied","Data":"bf89ade5c4f47ccb02c4b26cb0dbc7d9957bf63315a2bcfbe32ac0d3a648283c"}
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.490048 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-vl5r9" event={"ID":"78d60dd3-8522-4475-a323-6acf4ac1abdc","Type":"ContainerStarted","Data":"8ece2341be02045cc853690b3e8e7d55eb848a0e6f04b11c6f63dfb5af427e8d"}
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.490342 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t"
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.492191 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-z2h5b" event={"ID":"78ef6e28-4ff8-4744-88df-70e2aaaa2873","Type":"ContainerStarted","Data":"5291459a27bd4c9524cc973d1b632a6a69bdefdfabc889feede99e5b5d11a04e"}
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.492224 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-z2h5b" event={"ID":"78ef6e28-4ff8-4744-88df-70e2aaaa2873","Type":"ContainerStarted","Data":"d9c1bd7cd263728fff5a56e6ca08343081df387825b5df821a40d352ff7925f9"}
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.493326 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-hfkdh" event={"ID":"6f7d2c51-135d-4c03-a786-49ddacf80604","Type":"ContainerStarted","Data":"56637552f61e21cc7afbb249b0e407f21e14070c390a1fa8da0748a616f3cb20"}
Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.493351 5120
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-hfkdh" event={"ID":"6f7d2c51-135d-4c03-a786-49ddacf80604","Type":"ContainerStarted","Data":"498d7d3a4cefb1cb0f22ea203fee5c15b8fab56f75656ffadeb766ff0feaf5e3"} Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.493783 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.493813 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-hfkdh" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.495500 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" event={"ID":"dddece50-dbb3-4cd0-9102-15371396ab49","Type":"ContainerStarted","Data":"26e6cc7ba408ad86b374acd503dfd7d94e7b5e3942fe8cd753a45abf152628cc"} Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.495526 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" event={"ID":"dddece50-dbb3-4cd0-9102-15371396ab49","Type":"ContainerStarted","Data":"43a1ab65cf57a58fa3501bda1d8e7e2877860ba335f17c22d2ab43a35d648908"} Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.502285 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.504025 5120 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-5r6br container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.504092 5120 prober.go:120] "Probe failed" 
probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" podUID="dddece50-dbb3-4cd0-9102-15371396ab49" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.514283 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.524617 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-v9567" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.536523 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.556554 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.579405 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.597532 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.612772 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.616052 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.633167 5120 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.671672 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.676450 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.693042 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.713470 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.754041 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.759889 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-bound-sa-token\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.759933 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a1964f37-5430-4e1f-93aa-0f6e3761cff6-tmp-dir\") pod \"kube-apiserver-operator-575994946d-8jspp\" (UID: \"a1964f37-5430-4e1f-93aa-0f6e3761cff6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8jspp" Dec 11 
16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.759959 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1964f37-5430-4e1f-93aa-0f6e3761cff6-kube-api-access\") pod \"kube-apiserver-operator-575994946d-8jspp\" (UID: \"a1964f37-5430-4e1f-93aa-0f6e3761cff6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8jspp" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.760039 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cfe0ce73-8c24-4494-a66c-54fb1f143400-tmp\") pod \"cluster-image-registry-operator-86c45576b9-x4v9v\" (UID: \"cfe0ce73-8c24-4494-a66c-54fb1f143400\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-x4v9v" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.760098 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/cfe0ce73-8c24-4494-a66c-54fb1f143400-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-x4v9v\" (UID: \"cfe0ce73-8c24-4494-a66c-54fb1f143400\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-x4v9v" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.760297 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-trusted-ca\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.760326 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/040e966f-334a-4b63-a329-01b73c6817f2-tmp-dir\") pod \"dns-operator-799b87ffcd-xfffw\" (UID: \"040e966f-334a-4b63-a329-01b73c6817f2\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xfffw" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.760404 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cfe0ce73-8c24-4494-a66c-54fb1f143400-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-x4v9v\" (UID: \"cfe0ce73-8c24-4494-a66c-54fb1f143400\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-x4v9v" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.760441 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.760524 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-installation-pull-secrets\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.760571 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cfe0ce73-8c24-4494-a66c-54fb1f143400-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-x4v9v\" (UID: \"cfe0ce73-8c24-4494-a66c-54fb1f143400\") " 
pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-x4v9v" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.760595 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cfe0ce73-8c24-4494-a66c-54fb1f143400-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-x4v9v\" (UID: \"cfe0ce73-8c24-4494-a66c-54fb1f143400\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-x4v9v" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.760630 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-registry-certificates\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.760653 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1964f37-5430-4e1f-93aa-0f6e3761cff6-config\") pod \"kube-apiserver-operator-575994946d-8jspp\" (UID: \"a1964f37-5430-4e1f-93aa-0f6e3761cff6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8jspp" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.760693 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w26xk\" (UniqueName: \"kubernetes.io/projected/cfe0ce73-8c24-4494-a66c-54fb1f143400-kube-api-access-w26xk\") pod \"cluster-image-registry-operator-86c45576b9-x4v9v\" (UID: \"cfe0ce73-8c24-4494-a66c-54fb1f143400\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-x4v9v" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.760737 5120 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/040e966f-334a-4b63-a329-01b73c6817f2-metrics-tls\") pod \"dns-operator-799b87ffcd-xfffw\" (UID: \"040e966f-334a-4b63-a329-01b73c6817f2\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xfffw" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.760815 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1964f37-5430-4e1f-93aa-0f6e3761cff6-serving-cert\") pod \"kube-apiserver-operator-575994946d-8jspp\" (UID: \"a1964f37-5430-4e1f-93aa-0f6e3761cff6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8jspp" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.760838 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnq45\" (UniqueName: \"kubernetes.io/projected/040e966f-334a-4b63-a329-01b73c6817f2-kube-api-access-lnq45\") pod \"dns-operator-799b87ffcd-xfffw\" (UID: \"040e966f-334a-4b63-a329-01b73c6817f2\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xfffw" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.760878 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpfz9\" (UniqueName: \"kubernetes.io/projected/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-kube-api-access-vpfz9\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.760962 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-registry-tls\") pod \"image-registry-66587d64c8-s2npb\" (UID: 
\"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.760984 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-ca-trust-extracted\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:46 crc kubenswrapper[5120]: E1211 16:02:46.767215 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:47.267197862 +0000 UTC m=+116.521501193 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.775576 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.797628 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.809995 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46356: no serving certificate available for the kubelet" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.811351 5120 
request.go:752] "Waited before sending request" delay="1.770130743s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dsigning-cabundle&limit=500&resourceVersion=0" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.823432 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.834840 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.859425 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.862642 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.862760 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-trusted-ca\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.862795 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b99d695-4aaa-49ee-89f2-4597772b73ed-kube-api-access\") pod 
\"kube-controller-manager-operator-69d5f845f8-pshw7\" (UID: \"0b99d695-4aaa-49ee-89f2-4597772b73ed\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-pshw7" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.862822 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f117d28d-206e-485f-b234-38e2945b1a7a-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-mvwb4\" (UID: \"f117d28d-206e-485f-b234-38e2945b1a7a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mvwb4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.862843 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a2d9797c-79c8-41b8-943c-c30d41b5d2ba-tmpfs\") pod \"packageserver-7d4fc7d867-6fpf4\" (UID: \"a2d9797c-79c8-41b8-943c-c30d41b5d2ba\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fpf4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.862867 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d3ba2b9b-a777-4c95-bdd8-3feda00275ef-oauth-serving-cert\") pod \"console-64d44f6ddf-wvrn4\" (UID: \"d3ba2b9b-a777-4c95-bdd8-3feda00275ef\") " pod="openshift-console/console-64d44f6ddf-wvrn4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.862882 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/5d5549cf-9120-4619-8794-574e335d251b-mountpoint-dir\") pod \"csi-hostpathplugin-z42wj\" (UID: \"5d5549cf-9120-4619-8794-574e335d251b\") " pod="hostpath-provisioner/csi-hostpathplugin-z42wj" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.862899 5120 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfvsp\" (UniqueName: \"kubernetes.io/projected/a6223410-b237-49b0-b6a5-50cb4169ba81-kube-api-access-bfvsp\") pod \"kube-storage-version-migrator-operator-565b79b866-x89dk\" (UID: \"a6223410-b237-49b0-b6a5-50cb4169ba81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-x89dk" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.862914 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/09d4a454-c53e-446e-9c58-ace5cef3d494-tmp\") pod \"marketplace-operator-547dbd544d-q29gs\" (UID: \"09d4a454-c53e-446e-9c58-ace5cef3d494\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-q29gs" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.862930 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wtxm\" (UniqueName: \"kubernetes.io/projected/cef0f13d-a538-4de5-be19-28719e4e8bfc-kube-api-access-6wtxm\") pod \"multus-admission-controller-69db94689b-d5wst\" (UID: \"cef0f13d-a538-4de5-be19-28719e4e8bfc\") " pod="openshift-multus/multus-admission-controller-69db94689b-d5wst" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.862945 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/15f819c3-8855-465a-8de2-4bcac9a10708-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-cbgwz\" (UID: \"15f819c3-8855-465a-8de2-4bcac9a10708\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cbgwz" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.862963 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/040e966f-334a-4b63-a329-01b73c6817f2-metrics-tls\") pod \"dns-operator-799b87ffcd-xfffw\" (UID: \"040e966f-334a-4b63-a329-01b73c6817f2\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xfffw" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.862978 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65856\" (UniqueName: \"kubernetes.io/projected/ba200aeb-5dcb-4166-83ed-dc53a459e68f-kube-api-access-65856\") pod \"control-plane-machine-set-operator-75ffdb6fcd-96ggd\" (UID: \"ba200aeb-5dcb-4166-83ed-dc53a459e68f\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-96ggd" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863005 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1964f37-5430-4e1f-93aa-0f6e3761cff6-serving-cert\") pod \"kube-apiserver-operator-575994946d-8jspp\" (UID: \"a1964f37-5430-4e1f-93aa-0f6e3761cff6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8jspp" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863020 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lnq45\" (UniqueName: \"kubernetes.io/projected/040e966f-334a-4b63-a329-01b73c6817f2-kube-api-access-lnq45\") pod \"dns-operator-799b87ffcd-xfffw\" (UID: \"040e966f-334a-4b63-a329-01b73c6817f2\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xfffw" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863034 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3ba2b9b-a777-4c95-bdd8-3feda00275ef-console-serving-cert\") pod \"console-64d44f6ddf-wvrn4\" (UID: \"d3ba2b9b-a777-4c95-bdd8-3feda00275ef\") " pod="openshift-console/console-64d44f6ddf-wvrn4" Dec 11 16:02:46 crc 
kubenswrapper[5120]: I1211 16:02:46.863048 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84f8d245-d500-4435-89b2-4926bedad82c-serving-cert\") pod \"etcd-operator-69b85846b6-chrjf\" (UID: \"84f8d245-d500-4435-89b2-4926bedad82c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-chrjf" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863062 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8c17d3f5-8df5-444a-aff3-958c7bdf9c04-signing-cabundle\") pod \"service-ca-74545575db-q2dhg\" (UID: \"8c17d3f5-8df5-444a-aff3-958c7bdf9c04\") " pod="openshift-service-ca/service-ca-74545575db-q2dhg" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863078 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vpfz9\" (UniqueName: \"kubernetes.io/projected/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-kube-api-access-vpfz9\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863104 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d3ba2b9b-a777-4c95-bdd8-3feda00275ef-console-config\") pod \"console-64d44f6ddf-wvrn4\" (UID: \"d3ba2b9b-a777-4c95-bdd8-3feda00275ef\") " pod="openshift-console/console-64d44f6ddf-wvrn4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863123 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-registry-tls\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " 
pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863139 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-ca-trust-extracted\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863172 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84cc1699-abca-47ca-b641-438429faa1a8-config\") pod \"service-ca-operator-5b9c976747-8bt64\" (UID: \"84cc1699-abca-47ca-b641-438429faa1a8\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8bt64" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863186 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x24rg\" (UniqueName: \"kubernetes.io/projected/c5af33d5-343a-4149-b690-44b4a97ff385-kube-api-access-x24rg\") pod \"ingress-operator-6b9cb4dbcf-f4gw6\" (UID: \"c5af33d5-343a-4149-b690-44b4a97ff385\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f4gw6" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863206 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5d5549cf-9120-4619-8794-574e335d251b-registration-dir\") pod \"csi-hostpathplugin-z42wj\" (UID: \"5d5549cf-9120-4619-8794-574e335d251b\") " pod="hostpath-provisioner/csi-hostpathplugin-z42wj" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863220 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/f117d28d-206e-485f-b234-38e2945b1a7a-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-mvwb4\" (UID: \"f117d28d-206e-485f-b234-38e2945b1a7a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mvwb4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863236 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v84kf\" (UniqueName: \"kubernetes.io/projected/5d7a2793-71cb-48e4-9f55-527f7b4a7903-kube-api-access-v84kf\") pod \"machine-config-controller-f9cdd68f7-f5tq6\" (UID: \"5d7a2793-71cb-48e4-9f55-527f7b4a7903\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-f5tq6" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863260 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a1964f37-5430-4e1f-93aa-0f6e3761cff6-tmp-dir\") pod \"kube-apiserver-operator-575994946d-8jspp\" (UID: \"a1964f37-5430-4e1f-93aa-0f6e3761cff6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8jspp" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863277 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f117d28d-206e-485f-b234-38e2945b1a7a-config\") pod \"openshift-kube-scheduler-operator-54f497555d-mvwb4\" (UID: \"f117d28d-206e-485f-b234-38e2945b1a7a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mvwb4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863293 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/135a93ff-5a38-46d6-821f-866964572cf5-cert\") pod \"ingress-canary-gtbd4\" (UID: \"135a93ff-5a38-46d6-821f-866964572cf5\") " 
pod="openshift-ingress-canary/ingress-canary-gtbd4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863308 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/15f819c3-8855-465a-8de2-4bcac9a10708-srv-cert\") pod \"catalog-operator-75ff9f647d-cbgwz\" (UID: \"15f819c3-8855-465a-8de2-4bcac9a10708\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cbgwz" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863333 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-bound-sa-token\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863349 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a623919a-d893-4f53-9538-2dc253a63989-config-volume\") pod \"collect-profiles-29424480-4wgwc\" (UID: \"a623919a-d893-4f53-9538-2dc253a63989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424480-4wgwc" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863364 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d3ba2b9b-a777-4c95-bdd8-3feda00275ef-service-ca\") pod \"console-64d44f6ddf-wvrn4\" (UID: \"d3ba2b9b-a777-4c95-bdd8-3feda00275ef\") " pod="openshift-console/console-64d44f6ddf-wvrn4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863386 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx5nl\" (UniqueName: 
\"kubernetes.io/projected/d3ba2b9b-a777-4c95-bdd8-3feda00275ef-kube-api-access-cx5nl\") pod \"console-64d44f6ddf-wvrn4\" (UID: \"d3ba2b9b-a777-4c95-bdd8-3feda00275ef\") " pod="openshift-console/console-64d44f6ddf-wvrn4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863400 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/920040aa-0665-4aa8-8f93-dd24feadeef2-config-volume\") pod \"dns-default-5qdss\" (UID: \"920040aa-0665-4aa8-8f93-dd24feadeef2\") " pod="openshift-dns/dns-default-5qdss" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863414 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkdg2\" (UniqueName: \"kubernetes.io/projected/135a93ff-5a38-46d6-821f-866964572cf5-kube-api-access-nkdg2\") pod \"ingress-canary-gtbd4\" (UID: \"135a93ff-5a38-46d6-821f-866964572cf5\") " pod="openshift-ingress-canary/ingress-canary-gtbd4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863429 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfdcr\" (UniqueName: \"kubernetes.io/projected/a2d9797c-79c8-41b8-943c-c30d41b5d2ba-kube-api-access-bfdcr\") pod \"packageserver-7d4fc7d867-6fpf4\" (UID: \"a2d9797c-79c8-41b8-943c-c30d41b5d2ba\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fpf4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863446 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cef0f13d-a538-4de5-be19-28719e4e8bfc-webhook-certs\") pod \"multus-admission-controller-69db94689b-d5wst\" (UID: \"cef0f13d-a538-4de5-be19-28719e4e8bfc\") " pod="openshift-multus/multus-admission-controller-69db94689b-d5wst" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863460 5120 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5nnn\" (UniqueName: \"kubernetes.io/projected/1afcf892-0bad-42b2-9088-3a7c76be334f-kube-api-access-n5nnn\") pod \"machine-config-operator-67c9d58cbb-9qjgp\" (UID: \"1afcf892-0bad-42b2-9088-3a7c76be334f\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-9qjgp" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863525 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d3ba2b9b-a777-4c95-bdd8-3feda00275ef-console-oauth-config\") pod \"console-64d44f6ddf-wvrn4\" (UID: \"d3ba2b9b-a777-4c95-bdd8-3feda00275ef\") " pod="openshift-console/console-64d44f6ddf-wvrn4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863540 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/5d5549cf-9120-4619-8794-574e335d251b-plugins-dir\") pod \"csi-hostpathplugin-z42wj\" (UID: \"5d5549cf-9120-4619-8794-574e335d251b\") " pod="hostpath-provisioner/csi-hostpathplugin-z42wj" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863572 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8c17d3f5-8df5-444a-aff3-958c7bdf9c04-signing-key\") pod \"service-ca-74545575db-q2dhg\" (UID: \"8c17d3f5-8df5-444a-aff3-958c7bdf9c04\") " pod="openshift-service-ca/service-ca-74545575db-q2dhg" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863586 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c5af33d5-343a-4149-b690-44b4a97ff385-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-f4gw6\" (UID: \"c5af33d5-343a-4149-b690-44b4a97ff385\") " 
pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f4gw6" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863617 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/13193ac4-0614-4822-860d-864860616013-srv-cert\") pod \"olm-operator-5cdf44d969-5jrjd\" (UID: \"13193ac4-0614-4822-860d-864860616013\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5jrjd" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863634 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/84f8d245-d500-4435-89b2-4926bedad82c-etcd-client\") pod \"etcd-operator-69b85846b6-chrjf\" (UID: \"84f8d245-d500-4435-89b2-4926bedad82c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-chrjf" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863650 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg7r7\" (UniqueName: \"kubernetes.io/projected/84f8d245-d500-4435-89b2-4926bedad82c-kube-api-access-hg7r7\") pod \"etcd-operator-69b85846b6-chrjf\" (UID: \"84f8d245-d500-4435-89b2-4926bedad82c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-chrjf" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863665 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6223410-b237-49b0-b6a5-50cb4169ba81-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-x89dk\" (UID: \"a6223410-b237-49b0-b6a5-50cb4169ba81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-x89dk" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863681 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-sgddk\" (UID: \"ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-sgddk" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863708 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/84f8d245-d500-4435-89b2-4926bedad82c-etcd-ca\") pod \"etcd-operator-69b85846b6-chrjf\" (UID: \"84f8d245-d500-4435-89b2-4926bedad82c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-chrjf" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863740 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a2d9797c-79c8-41b8-943c-c30d41b5d2ba-apiservice-cert\") pod \"packageserver-7d4fc7d867-6fpf4\" (UID: \"a2d9797c-79c8-41b8-943c-c30d41b5d2ba\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fpf4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863756 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92zrl\" (UniqueName: \"kubernetes.io/projected/13193ac4-0614-4822-860d-864860616013-kube-api-access-92zrl\") pod \"olm-operator-5cdf44d969-5jrjd\" (UID: \"13193ac4-0614-4822-860d-864860616013\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5jrjd" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863770 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c-ready\") pod \"cni-sysctl-allowlist-ds-sgddk\" (UID: \"ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-sgddk" Dec 11 16:02:46 crc 
kubenswrapper[5120]: I1211 16:02:46.863791 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/040e966f-334a-4b63-a329-01b73c6817f2-tmp-dir\") pod \"dns-operator-799b87ffcd-xfffw\" (UID: \"040e966f-334a-4b63-a329-01b73c6817f2\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xfffw" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863807 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ba200aeb-5dcb-4166-83ed-dc53a459e68f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-96ggd\" (UID: \"ba200aeb-5dcb-4166-83ed-dc53a459e68f\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-96ggd" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863824 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84cc1699-abca-47ca-b641-438429faa1a8-serving-cert\") pod \"service-ca-operator-5b9c976747-8bt64\" (UID: \"84cc1699-abca-47ca-b641-438429faa1a8\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8bt64" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863839 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1afcf892-0bad-42b2-9088-3a7c76be334f-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-9qjgp\" (UID: \"1afcf892-0bad-42b2-9088-3a7c76be334f\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-9qjgp" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863868 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/cfe0ce73-8c24-4494-a66c-54fb1f143400-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-x4v9v\" (UID: \"cfe0ce73-8c24-4494-a66c-54fb1f143400\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-x4v9v" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863888 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f117d28d-206e-485f-b234-38e2945b1a7a-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-mvwb4\" (UID: \"f117d28d-206e-485f-b234-38e2945b1a7a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mvwb4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863920 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-installation-pull-secrets\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863937 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4w67\" (UniqueName: \"kubernetes.io/projected/15f819c3-8855-465a-8de2-4bcac9a10708-kube-api-access-v4w67\") pod \"catalog-operator-75ff9f647d-cbgwz\" (UID: \"15f819c3-8855-465a-8de2-4bcac9a10708\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cbgwz" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863952 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a623919a-d893-4f53-9538-2dc253a63989-secret-volume\") pod \"collect-profiles-29424480-4wgwc\" (UID: \"a623919a-d893-4f53-9538-2dc253a63989\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29424480-4wgwc" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863967 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b99d695-4aaa-49ee-89f2-4597772b73ed-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-pshw7\" (UID: \"0b99d695-4aaa-49ee-89f2-4597772b73ed\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-pshw7" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.863986 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cfe0ce73-8c24-4494-a66c-54fb1f143400-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-x4v9v\" (UID: \"cfe0ce73-8c24-4494-a66c-54fb1f143400\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-x4v9v" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.864002 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cfe0ce73-8c24-4494-a66c-54fb1f143400-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-x4v9v\" (UID: \"cfe0ce73-8c24-4494-a66c-54fb1f143400\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-x4v9v" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.864019 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b99d695-4aaa-49ee-89f2-4597772b73ed-config\") pod \"kube-controller-manager-operator-69d5f845f8-pshw7\" (UID: \"0b99d695-4aaa-49ee-89f2-4597772b73ed\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-pshw7" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.864039 5120 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-registry-certificates\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.864057 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1964f37-5430-4e1f-93aa-0f6e3761cff6-config\") pod \"kube-apiserver-operator-575994946d-8jspp\" (UID: \"a1964f37-5430-4e1f-93aa-0f6e3761cff6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8jspp" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.864073 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3ba2b9b-a777-4c95-bdd8-3feda00275ef-trusted-ca-bundle\") pod \"console-64d44f6ddf-wvrn4\" (UID: \"d3ba2b9b-a777-4c95-bdd8-3feda00275ef\") " pod="openshift-console/console-64d44f6ddf-wvrn4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.864088 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a2d9797c-79c8-41b8-943c-c30d41b5d2ba-webhook-cert\") pod \"packageserver-7d4fc7d867-6fpf4\" (UID: \"a2d9797c-79c8-41b8-943c-c30d41b5d2ba\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fpf4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.864116 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w26xk\" (UniqueName: \"kubernetes.io/projected/cfe0ce73-8c24-4494-a66c-54fb1f143400-kube-api-access-w26xk\") pod \"cluster-image-registry-operator-86c45576b9-x4v9v\" (UID: \"cfe0ce73-8c24-4494-a66c-54fb1f143400\") " 
pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-x4v9v" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.864133 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrjzf\" (UniqueName: \"kubernetes.io/projected/298a1ff3-c53d-4a3a-b113-b9dab74f54a9-kube-api-access-vrjzf\") pod \"machine-config-server-whnqw\" (UID: \"298a1ff3-c53d-4a3a-b113-b9dab74f54a9\") " pod="openshift-machine-config-operator/machine-config-server-whnqw" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.864229 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/920040aa-0665-4aa8-8f93-dd24feadeef2-tmp-dir\") pod \"dns-default-5qdss\" (UID: \"920040aa-0665-4aa8-8f93-dd24feadeef2\") " pod="openshift-dns/dns-default-5qdss" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.864256 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-sgddk\" (UID: \"ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-sgddk" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.864276 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/298a1ff3-c53d-4a3a-b113-b9dab74f54a9-node-bootstrap-token\") pod \"machine-config-server-whnqw\" (UID: \"298a1ff3-c53d-4a3a-b113-b9dab74f54a9\") " pod="openshift-machine-config-operator/machine-config-server-whnqw" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.864294 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhlhm\" (UniqueName: 
\"kubernetes.io/projected/02544bf6-305f-4419-9bd1-fa1662e6b0bb-kube-api-access-fhlhm\") pod \"package-server-manager-77f986bd66-8mr9f\" (UID: \"02544bf6-305f-4419-9bd1-fa1662e6b0bb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8mr9f" Dec 11 16:02:46 crc kubenswrapper[5120]: E1211 16:02:46.864354 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:47.364336003 +0000 UTC m=+116.618639334 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.864389 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1afcf892-0bad-42b2-9088-3a7c76be334f-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-9qjgp\" (UID: \"1afcf892-0bad-42b2-9088-3a7c76be334f\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-9qjgp" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.864431 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/15f819c3-8855-465a-8de2-4bcac9a10708-tmpfs\") pod \"catalog-operator-75ff9f647d-cbgwz\" (UID: \"15f819c3-8855-465a-8de2-4bcac9a10708\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cbgwz" Dec 11 
16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.864450 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/13193ac4-0614-4822-860d-864860616013-profile-collector-cert\") pod \"olm-operator-5cdf44d969-5jrjd\" (UID: \"13193ac4-0614-4822-860d-864860616013\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5jrjd" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.864473 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lggkz\" (UniqueName: \"kubernetes.io/projected/a623919a-d893-4f53-9538-2dc253a63989-kube-api-access-lggkz\") pod \"collect-profiles-29424480-4wgwc\" (UID: \"a623919a-d893-4f53-9538-2dc253a63989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424480-4wgwc" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.864540 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/920040aa-0665-4aa8-8f93-dd24feadeef2-metrics-tls\") pod \"dns-default-5qdss\" (UID: \"920040aa-0665-4aa8-8f93-dd24feadeef2\") " pod="openshift-dns/dns-default-5qdss" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.864556 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6223410-b237-49b0-b6a5-50cb4169ba81-config\") pod \"kube-storage-version-migrator-operator-565b79b866-x89dk\" (UID: \"a6223410-b237-49b0-b6a5-50cb4169ba81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-x89dk" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.864581 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/84f8d245-d500-4435-89b2-4926bedad82c-etcd-service-ca\") pod \"etcd-operator-69b85846b6-chrjf\" (UID: \"84f8d245-d500-4435-89b2-4926bedad82c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-chrjf" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.864600 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5d7a2793-71cb-48e4-9f55-527f7b4a7903-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-f5tq6\" (UID: \"5d7a2793-71cb-48e4-9f55-527f7b4a7903\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-f5tq6" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.864616 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5d7a2793-71cb-48e4-9f55-527f7b4a7903-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-f5tq6\" (UID: \"5d7a2793-71cb-48e4-9f55-527f7b4a7903\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-f5tq6" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.864635 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1afcf892-0bad-42b2-9088-3a7c76be334f-images\") pod \"machine-config-operator-67c9d58cbb-9qjgp\" (UID: \"1afcf892-0bad-42b2-9088-3a7c76be334f\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-9qjgp" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.864655 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2f2f\" (UniqueName: \"kubernetes.io/projected/8c17d3f5-8df5-444a-aff3-958c7bdf9c04-kube-api-access-t2f2f\") pod \"service-ca-74545575db-q2dhg\" (UID: \"8c17d3f5-8df5-444a-aff3-958c7bdf9c04\") " 
pod="openshift-service-ca/service-ca-74545575db-q2dhg" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.864674 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqzmt\" (UniqueName: \"kubernetes.io/projected/726e6606-5b45-4bba-865a-f581e8f6c218-kube-api-access-xqzmt\") pod \"downloads-747b44746d-8rdg7\" (UID: \"726e6606-5b45-4bba-865a-f581e8f6c218\") " pod="openshift-console/downloads-747b44746d-8rdg7" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.864712 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/09d4a454-c53e-446e-9c58-ace5cef3d494-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-q29gs\" (UID: \"09d4a454-c53e-446e-9c58-ace5cef3d494\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-q29gs" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.864737 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1964f37-5430-4e1f-93aa-0f6e3761cff6-kube-api-access\") pod \"kube-apiserver-operator-575994946d-8jspp\" (UID: \"a1964f37-5430-4e1f-93aa-0f6e3761cff6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8jspp" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.864772 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5d5549cf-9120-4619-8794-574e335d251b-socket-dir\") pod \"csi-hostpathplugin-z42wj\" (UID: \"5d5549cf-9120-4619-8794-574e335d251b\") " pod="hostpath-provisioner/csi-hostpathplugin-z42wj" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.865051 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8jbt\" (UniqueName: 
\"kubernetes.io/projected/5d5549cf-9120-4619-8794-574e335d251b-kube-api-access-w8jbt\") pod \"csi-hostpathplugin-z42wj\" (UID: \"5d5549cf-9120-4619-8794-574e335d251b\") " pod="hostpath-provisioner/csi-hostpathplugin-z42wj" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.865078 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cfe0ce73-8c24-4494-a66c-54fb1f143400-tmp\") pod \"cluster-image-registry-operator-86c45576b9-x4v9v\" (UID: \"cfe0ce73-8c24-4494-a66c-54fb1f143400\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-x4v9v" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.865095 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/13193ac4-0614-4822-860d-864860616013-tmpfs\") pod \"olm-operator-5cdf44d969-5jrjd\" (UID: \"13193ac4-0614-4822-860d-864860616013\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5jrjd" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.865114 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/298a1ff3-c53d-4a3a-b113-b9dab74f54a9-certs\") pod \"machine-config-server-whnqw\" (UID: \"298a1ff3-c53d-4a3a-b113-b9dab74f54a9\") " pod="openshift-machine-config-operator/machine-config-server-whnqw" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.865164 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0b99d695-4aaa-49ee-89f2-4597772b73ed-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-pshw7\" (UID: \"0b99d695-4aaa-49ee-89f2-4597772b73ed\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-pshw7" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 
16:02:46.865182 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rrcc\" (UniqueName: \"kubernetes.io/projected/84cc1699-abca-47ca-b641-438429faa1a8-kube-api-access-5rrcc\") pod \"service-ca-operator-5b9c976747-8bt64\" (UID: \"84cc1699-abca-47ca-b641-438429faa1a8\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8bt64" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.865208 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/cfe0ce73-8c24-4494-a66c-54fb1f143400-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-x4v9v\" (UID: \"cfe0ce73-8c24-4494-a66c-54fb1f143400\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-x4v9v" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.865299 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84f8d245-d500-4435-89b2-4926bedad82c-config\") pod \"etcd-operator-69b85846b6-chrjf\" (UID: \"84f8d245-d500-4435-89b2-4926bedad82c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-chrjf" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.865386 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/09d4a454-c53e-446e-9c58-ace5cef3d494-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-q29gs\" (UID: \"09d4a454-c53e-446e-9c58-ace5cef3d494\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-q29gs" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.865418 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsqws\" (UniqueName: 
\"kubernetes.io/projected/ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c-kube-api-access-rsqws\") pod \"cni-sysctl-allowlist-ds-sgddk\" (UID: \"ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-sgddk" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.865459 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/84f8d245-d500-4435-89b2-4926bedad82c-tmp-dir\") pod \"etcd-operator-69b85846b6-chrjf\" (UID: \"84f8d245-d500-4435-89b2-4926bedad82c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-chrjf" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.865482 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5af33d5-343a-4149-b690-44b4a97ff385-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-f4gw6\" (UID: \"c5af33d5-343a-4149-b690-44b4a97ff385\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f4gw6" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.865510 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h7x8\" (UniqueName: \"kubernetes.io/projected/09d4a454-c53e-446e-9c58-ace5cef3d494-kube-api-access-8h7x8\") pod \"marketplace-operator-547dbd544d-q29gs\" (UID: \"09d4a454-c53e-446e-9c58-ace5cef3d494\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-q29gs" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.865541 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/5d5549cf-9120-4619-8794-574e335d251b-csi-data-dir\") pod \"csi-hostpathplugin-z42wj\" (UID: \"5d5549cf-9120-4619-8794-574e335d251b\") " pod="hostpath-provisioner/csi-hostpathplugin-z42wj" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.865568 
5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/02544bf6-305f-4419-9bd1-fa1662e6b0bb-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-8mr9f\" (UID: \"02544bf6-305f-4419-9bd1-fa1662e6b0bb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8mr9f" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.865589 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/cfe0ce73-8c24-4494-a66c-54fb1f143400-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-x4v9v\" (UID: \"cfe0ce73-8c24-4494-a66c-54fb1f143400\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-x4v9v" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.867265 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6lbr\" (UniqueName: \"kubernetes.io/projected/920040aa-0665-4aa8-8f93-dd24feadeef2-kube-api-access-q6lbr\") pod \"dns-default-5qdss\" (UID: \"920040aa-0665-4aa8-8f93-dd24feadeef2\") " pod="openshift-dns/dns-default-5qdss" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.867295 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5af33d5-343a-4149-b690-44b4a97ff385-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-f4gw6\" (UID: \"c5af33d5-343a-4149-b690-44b4a97ff385\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f4gw6" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.879885 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/040e966f-334a-4b63-a329-01b73c6817f2-tmp-dir\") pod \"dns-operator-799b87ffcd-xfffw\" (UID: 
\"040e966f-334a-4b63-a329-01b73c6817f2\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xfffw" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.880583 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cfe0ce73-8c24-4494-a66c-54fb1f143400-tmp\") pod \"cluster-image-registry-operator-86c45576b9-x4v9v\" (UID: \"cfe0ce73-8c24-4494-a66c-54fb1f143400\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-x4v9v" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.881038 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-trusted-ca\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.882069 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cfe0ce73-8c24-4494-a66c-54fb1f143400-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-x4v9v\" (UID: \"cfe0ce73-8c24-4494-a66c-54fb1f143400\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-x4v9v" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.887847 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-registry-tls\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.888251 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-ca-trust-extracted\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.889178 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.893524 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-registry-certificates\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.893704 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cfe0ce73-8c24-4494-a66c-54fb1f143400-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-x4v9v\" (UID: \"cfe0ce73-8c24-4494-a66c-54fb1f143400\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-x4v9v" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.895696 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.898858 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a1964f37-5430-4e1f-93aa-0f6e3761cff6-tmp-dir\") pod \"kube-apiserver-operator-575994946d-8jspp\" (UID: \"a1964f37-5430-4e1f-93aa-0f6e3761cff6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8jspp" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.899370 5120 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1964f37-5430-4e1f-93aa-0f6e3761cff6-config\") pod \"kube-apiserver-operator-575994946d-8jspp\" (UID: \"a1964f37-5430-4e1f-93aa-0f6e3761cff6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8jspp" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.899880 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/040e966f-334a-4b63-a329-01b73c6817f2-metrics-tls\") pod \"dns-operator-799b87ffcd-xfffw\" (UID: \"040e966f-334a-4b63-a329-01b73c6817f2\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xfffw" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.910059 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1964f37-5430-4e1f-93aa-0f6e3761cff6-serving-cert\") pod \"kube-apiserver-operator-575994946d-8jspp\" (UID: \"a1964f37-5430-4e1f-93aa-0f6e3761cff6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8jspp" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.910751 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-installation-pull-secrets\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.912533 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46360: no serving certificate available for the kubelet" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.913475 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.937942 5120 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.938340 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.954581 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.962594 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46374: no serving certificate available for the kubelet" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.969047 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1afcf892-0bad-42b2-9088-3a7c76be334f-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-9qjgp\" (UID: \"1afcf892-0bad-42b2-9088-3a7c76be334f\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-9qjgp" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.969101 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f117d28d-206e-485f-b234-38e2945b1a7a-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-mvwb4\" (UID: \"f117d28d-206e-485f-b234-38e2945b1a7a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mvwb4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.969130 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v4w67\" (UniqueName: \"kubernetes.io/projected/15f819c3-8855-465a-8de2-4bcac9a10708-kube-api-access-v4w67\") pod \"catalog-operator-75ff9f647d-cbgwz\" (UID: \"15f819c3-8855-465a-8de2-4bcac9a10708\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cbgwz" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.969163 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a623919a-d893-4f53-9538-2dc253a63989-secret-volume\") pod \"collect-profiles-29424480-4wgwc\" (UID: \"a623919a-d893-4f53-9538-2dc253a63989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424480-4wgwc" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.969341 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b99d695-4aaa-49ee-89f2-4597772b73ed-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-pshw7\" (UID: \"0b99d695-4aaa-49ee-89f2-4597772b73ed\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-pshw7" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.969419 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b99d695-4aaa-49ee-89f2-4597772b73ed-config\") pod \"kube-controller-manager-operator-69d5f845f8-pshw7\" (UID: \"0b99d695-4aaa-49ee-89f2-4597772b73ed\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-pshw7" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.969496 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3ba2b9b-a777-4c95-bdd8-3feda00275ef-trusted-ca-bundle\") pod \"console-64d44f6ddf-wvrn4\" (UID: \"d3ba2b9b-a777-4c95-bdd8-3feda00275ef\") " pod="openshift-console/console-64d44f6ddf-wvrn4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.969563 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/a2d9797c-79c8-41b8-943c-c30d41b5d2ba-webhook-cert\") pod \"packageserver-7d4fc7d867-6fpf4\" (UID: \"a2d9797c-79c8-41b8-943c-c30d41b5d2ba\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fpf4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.969628 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vrjzf\" (UniqueName: \"kubernetes.io/projected/298a1ff3-c53d-4a3a-b113-b9dab74f54a9-kube-api-access-vrjzf\") pod \"machine-config-server-whnqw\" (UID: \"298a1ff3-c53d-4a3a-b113-b9dab74f54a9\") " pod="openshift-machine-config-operator/machine-config-server-whnqw" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.969782 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1afcf892-0bad-42b2-9088-3a7c76be334f-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-9qjgp\" (UID: \"1afcf892-0bad-42b2-9088-3a7c76be334f\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-9qjgp" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970217 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/920040aa-0665-4aa8-8f93-dd24feadeef2-tmp-dir\") pod \"dns-default-5qdss\" (UID: \"920040aa-0665-4aa8-8f93-dd24feadeef2\") " pod="openshift-dns/dns-default-5qdss" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970271 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-sgddk\" (UID: \"ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-sgddk" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970293 5120 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/298a1ff3-c53d-4a3a-b113-b9dab74f54a9-node-bootstrap-token\") pod \"machine-config-server-whnqw\" (UID: \"298a1ff3-c53d-4a3a-b113-b9dab74f54a9\") " pod="openshift-machine-config-operator/machine-config-server-whnqw" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970310 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fhlhm\" (UniqueName: \"kubernetes.io/projected/02544bf6-305f-4419-9bd1-fa1662e6b0bb-kube-api-access-fhlhm\") pod \"package-server-manager-77f986bd66-8mr9f\" (UID: \"02544bf6-305f-4419-9bd1-fa1662e6b0bb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8mr9f" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970328 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1afcf892-0bad-42b2-9088-3a7c76be334f-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-9qjgp\" (UID: \"1afcf892-0bad-42b2-9088-3a7c76be334f\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-9qjgp" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970351 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/15f819c3-8855-465a-8de2-4bcac9a10708-tmpfs\") pod \"catalog-operator-75ff9f647d-cbgwz\" (UID: \"15f819c3-8855-465a-8de2-4bcac9a10708\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cbgwz" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970366 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/13193ac4-0614-4822-860d-864860616013-profile-collector-cert\") pod \"olm-operator-5cdf44d969-5jrjd\" (UID: \"13193ac4-0614-4822-860d-864860616013\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5jrjd" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970385 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lggkz\" (UniqueName: \"kubernetes.io/projected/a623919a-d893-4f53-9538-2dc253a63989-kube-api-access-lggkz\") pod \"collect-profiles-29424480-4wgwc\" (UID: \"a623919a-d893-4f53-9538-2dc253a63989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424480-4wgwc" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970408 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/920040aa-0665-4aa8-8f93-dd24feadeef2-metrics-tls\") pod \"dns-default-5qdss\" (UID: \"920040aa-0665-4aa8-8f93-dd24feadeef2\") " pod="openshift-dns/dns-default-5qdss" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970423 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6223410-b237-49b0-b6a5-50cb4169ba81-config\") pod \"kube-storage-version-migrator-operator-565b79b866-x89dk\" (UID: \"a6223410-b237-49b0-b6a5-50cb4169ba81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-x89dk" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970440 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/84f8d245-d500-4435-89b2-4926bedad82c-etcd-service-ca\") pod \"etcd-operator-69b85846b6-chrjf\" (UID: \"84f8d245-d500-4435-89b2-4926bedad82c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-chrjf" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970455 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5d7a2793-71cb-48e4-9f55-527f7b4a7903-proxy-tls\") pod 
\"machine-config-controller-f9cdd68f7-f5tq6\" (UID: \"5d7a2793-71cb-48e4-9f55-527f7b4a7903\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-f5tq6" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970469 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5d7a2793-71cb-48e4-9f55-527f7b4a7903-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-f5tq6\" (UID: \"5d7a2793-71cb-48e4-9f55-527f7b4a7903\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-f5tq6" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970483 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1afcf892-0bad-42b2-9088-3a7c76be334f-images\") pod \"machine-config-operator-67c9d58cbb-9qjgp\" (UID: \"1afcf892-0bad-42b2-9088-3a7c76be334f\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-9qjgp" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970499 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t2f2f\" (UniqueName: \"kubernetes.io/projected/8c17d3f5-8df5-444a-aff3-958c7bdf9c04-kube-api-access-t2f2f\") pod \"service-ca-74545575db-q2dhg\" (UID: \"8c17d3f5-8df5-444a-aff3-958c7bdf9c04\") " pod="openshift-service-ca/service-ca-74545575db-q2dhg" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970515 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xqzmt\" (UniqueName: \"kubernetes.io/projected/726e6606-5b45-4bba-865a-f581e8f6c218-kube-api-access-xqzmt\") pod \"downloads-747b44746d-8rdg7\" (UID: \"726e6606-5b45-4bba-865a-f581e8f6c218\") " pod="openshift-console/downloads-747b44746d-8rdg7" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970519 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b99d695-4aaa-49ee-89f2-4597772b73ed-config\") pod \"kube-controller-manager-operator-69d5f845f8-pshw7\" (UID: \"0b99d695-4aaa-49ee-89f2-4597772b73ed\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-pshw7" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970536 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/09d4a454-c53e-446e-9c58-ace5cef3d494-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-q29gs\" (UID: \"09d4a454-c53e-446e-9c58-ace5cef3d494\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-q29gs" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970666 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5d5549cf-9120-4619-8794-574e335d251b-socket-dir\") pod \"csi-hostpathplugin-z42wj\" (UID: \"5d5549cf-9120-4619-8794-574e335d251b\") " pod="hostpath-provisioner/csi-hostpathplugin-z42wj" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970693 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w8jbt\" (UniqueName: \"kubernetes.io/projected/5d5549cf-9120-4619-8794-574e335d251b-kube-api-access-w8jbt\") pod \"csi-hostpathplugin-z42wj\" (UID: \"5d5549cf-9120-4619-8794-574e335d251b\") " pod="hostpath-provisioner/csi-hostpathplugin-z42wj" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970723 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/13193ac4-0614-4822-860d-864860616013-tmpfs\") pod \"olm-operator-5cdf44d969-5jrjd\" (UID: \"13193ac4-0614-4822-860d-864860616013\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5jrjd" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970746 
5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/298a1ff3-c53d-4a3a-b113-b9dab74f54a9-certs\") pod \"machine-config-server-whnqw\" (UID: \"298a1ff3-c53d-4a3a-b113-b9dab74f54a9\") " pod="openshift-machine-config-operator/machine-config-server-whnqw" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970771 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0b99d695-4aaa-49ee-89f2-4597772b73ed-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-pshw7\" (UID: \"0b99d695-4aaa-49ee-89f2-4597772b73ed\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-pshw7" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970798 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5rrcc\" (UniqueName: \"kubernetes.io/projected/84cc1699-abca-47ca-b641-438429faa1a8-kube-api-access-5rrcc\") pod \"service-ca-operator-5b9c976747-8bt64\" (UID: \"84cc1699-abca-47ca-b641-438429faa1a8\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8bt64" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970827 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84f8d245-d500-4435-89b2-4926bedad82c-config\") pod \"etcd-operator-69b85846b6-chrjf\" (UID: \"84f8d245-d500-4435-89b2-4926bedad82c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-chrjf" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970854 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/09d4a454-c53e-446e-9c58-ace5cef3d494-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-q29gs\" (UID: \"09d4a454-c53e-446e-9c58-ace5cef3d494\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-q29gs" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970876 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rsqws\" (UniqueName: \"kubernetes.io/projected/ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c-kube-api-access-rsqws\") pod \"cni-sysctl-allowlist-ds-sgddk\" (UID: \"ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-sgddk" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970901 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/84f8d245-d500-4435-89b2-4926bedad82c-tmp-dir\") pod \"etcd-operator-69b85846b6-chrjf\" (UID: \"84f8d245-d500-4435-89b2-4926bedad82c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-chrjf" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970925 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5af33d5-343a-4149-b690-44b4a97ff385-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-f4gw6\" (UID: \"c5af33d5-343a-4149-b690-44b4a97ff385\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f4gw6" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970950 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8h7x8\" (UniqueName: \"kubernetes.io/projected/09d4a454-c53e-446e-9c58-ace5cef3d494-kube-api-access-8h7x8\") pod \"marketplace-operator-547dbd544d-q29gs\" (UID: \"09d4a454-c53e-446e-9c58-ace5cef3d494\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-q29gs" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970975 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/5d5549cf-9120-4619-8794-574e335d251b-csi-data-dir\") pod 
\"csi-hostpathplugin-z42wj\" (UID: \"5d5549cf-9120-4619-8794-574e335d251b\") " pod="hostpath-provisioner/csi-hostpathplugin-z42wj" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.970999 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/02544bf6-305f-4419-9bd1-fa1662e6b0bb-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-8mr9f\" (UID: \"02544bf6-305f-4419-9bd1-fa1662e6b0bb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8mr9f" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971038 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q6lbr\" (UniqueName: \"kubernetes.io/projected/920040aa-0665-4aa8-8f93-dd24feadeef2-kube-api-access-q6lbr\") pod \"dns-default-5qdss\" (UID: \"920040aa-0665-4aa8-8f93-dd24feadeef2\") " pod="openshift-dns/dns-default-5qdss" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971060 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5af33d5-343a-4149-b690-44b4a97ff385-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-f4gw6\" (UID: \"c5af33d5-343a-4149-b690-44b4a97ff385\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f4gw6" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971102 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b99d695-4aaa-49ee-89f2-4597772b73ed-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-pshw7\" (UID: \"0b99d695-4aaa-49ee-89f2-4597772b73ed\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-pshw7" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971128 5120 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f117d28d-206e-485f-b234-38e2945b1a7a-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-mvwb4\" (UID: \"f117d28d-206e-485f-b234-38e2945b1a7a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mvwb4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971175 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a2d9797c-79c8-41b8-943c-c30d41b5d2ba-tmpfs\") pod \"packageserver-7d4fc7d867-6fpf4\" (UID: \"a2d9797c-79c8-41b8-943c-c30d41b5d2ba\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fpf4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971208 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971242 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d3ba2b9b-a777-4c95-bdd8-3feda00275ef-oauth-serving-cert\") pod \"console-64d44f6ddf-wvrn4\" (UID: \"d3ba2b9b-a777-4c95-bdd8-3feda00275ef\") " pod="openshift-console/console-64d44f6ddf-wvrn4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971263 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/5d5549cf-9120-4619-8794-574e335d251b-mountpoint-dir\") pod \"csi-hostpathplugin-z42wj\" (UID: \"5d5549cf-9120-4619-8794-574e335d251b\") " pod="hostpath-provisioner/csi-hostpathplugin-z42wj" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 
16:02:46.971287 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bfvsp\" (UniqueName: \"kubernetes.io/projected/a6223410-b237-49b0-b6a5-50cb4169ba81-kube-api-access-bfvsp\") pod \"kube-storage-version-migrator-operator-565b79b866-x89dk\" (UID: \"a6223410-b237-49b0-b6a5-50cb4169ba81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-x89dk" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971308 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/09d4a454-c53e-446e-9c58-ace5cef3d494-tmp\") pod \"marketplace-operator-547dbd544d-q29gs\" (UID: \"09d4a454-c53e-446e-9c58-ace5cef3d494\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-q29gs" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971332 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6wtxm\" (UniqueName: \"kubernetes.io/projected/cef0f13d-a538-4de5-be19-28719e4e8bfc-kube-api-access-6wtxm\") pod \"multus-admission-controller-69db94689b-d5wst\" (UID: \"cef0f13d-a538-4de5-be19-28719e4e8bfc\") " pod="openshift-multus/multus-admission-controller-69db94689b-d5wst" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971362 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/15f819c3-8855-465a-8de2-4bcac9a10708-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-cbgwz\" (UID: \"15f819c3-8855-465a-8de2-4bcac9a10708\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cbgwz" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971394 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-65856\" (UniqueName: \"kubernetes.io/projected/ba200aeb-5dcb-4166-83ed-dc53a459e68f-kube-api-access-65856\") 
pod \"control-plane-machine-set-operator-75ffdb6fcd-96ggd\" (UID: \"ba200aeb-5dcb-4166-83ed-dc53a459e68f\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-96ggd" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971427 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3ba2b9b-a777-4c95-bdd8-3feda00275ef-console-serving-cert\") pod \"console-64d44f6ddf-wvrn4\" (UID: \"d3ba2b9b-a777-4c95-bdd8-3feda00275ef\") " pod="openshift-console/console-64d44f6ddf-wvrn4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971450 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84f8d245-d500-4435-89b2-4926bedad82c-serving-cert\") pod \"etcd-operator-69b85846b6-chrjf\" (UID: \"84f8d245-d500-4435-89b2-4926bedad82c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-chrjf" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971472 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8c17d3f5-8df5-444a-aff3-958c7bdf9c04-signing-cabundle\") pod \"service-ca-74545575db-q2dhg\" (UID: \"8c17d3f5-8df5-444a-aff3-958c7bdf9c04\") " pod="openshift-service-ca/service-ca-74545575db-q2dhg" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971494 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/09d4a454-c53e-446e-9c58-ace5cef3d494-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-q29gs\" (UID: \"09d4a454-c53e-446e-9c58-ace5cef3d494\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-q29gs" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971505 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/d3ba2b9b-a777-4c95-bdd8-3feda00275ef-console-config\") pod \"console-64d44f6ddf-wvrn4\" (UID: \"d3ba2b9b-a777-4c95-bdd8-3feda00275ef\") " pod="openshift-console/console-64d44f6ddf-wvrn4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971539 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84cc1699-abca-47ca-b641-438429faa1a8-config\") pod \"service-ca-operator-5b9c976747-8bt64\" (UID: \"84cc1699-abca-47ca-b641-438429faa1a8\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8bt64" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971564 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x24rg\" (UniqueName: \"kubernetes.io/projected/c5af33d5-343a-4149-b690-44b4a97ff385-kube-api-access-x24rg\") pod \"ingress-operator-6b9cb4dbcf-f4gw6\" (UID: \"c5af33d5-343a-4149-b690-44b4a97ff385\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f4gw6" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971601 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5d5549cf-9120-4619-8794-574e335d251b-registration-dir\") pod \"csi-hostpathplugin-z42wj\" (UID: \"5d5549cf-9120-4619-8794-574e335d251b\") " pod="hostpath-provisioner/csi-hostpathplugin-z42wj" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971622 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f117d28d-206e-485f-b234-38e2945b1a7a-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-mvwb4\" (UID: \"f117d28d-206e-485f-b234-38e2945b1a7a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mvwb4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971649 5120 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v84kf\" (UniqueName: \"kubernetes.io/projected/5d7a2793-71cb-48e4-9f55-527f7b4a7903-kube-api-access-v84kf\") pod \"machine-config-controller-f9cdd68f7-f5tq6\" (UID: \"5d7a2793-71cb-48e4-9f55-527f7b4a7903\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-f5tq6" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971679 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f117d28d-206e-485f-b234-38e2945b1a7a-config\") pod \"openshift-kube-scheduler-operator-54f497555d-mvwb4\" (UID: \"f117d28d-206e-485f-b234-38e2945b1a7a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mvwb4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971700 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/135a93ff-5a38-46d6-821f-866964572cf5-cert\") pod \"ingress-canary-gtbd4\" (UID: \"135a93ff-5a38-46d6-821f-866964572cf5\") " pod="openshift-ingress-canary/ingress-canary-gtbd4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971727 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/15f819c3-8855-465a-8de2-4bcac9a10708-srv-cert\") pod \"catalog-operator-75ff9f647d-cbgwz\" (UID: \"15f819c3-8855-465a-8de2-4bcac9a10708\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cbgwz" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971754 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/920040aa-0665-4aa8-8f93-dd24feadeef2-tmp-dir\") pod \"dns-default-5qdss\" (UID: \"920040aa-0665-4aa8-8f93-dd24feadeef2\") " pod="openshift-dns/dns-default-5qdss" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 
16:02:46.971755 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a623919a-d893-4f53-9538-2dc253a63989-config-volume\") pod \"collect-profiles-29424480-4wgwc\" (UID: \"a623919a-d893-4f53-9538-2dc253a63989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424480-4wgwc" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971804 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d3ba2b9b-a777-4c95-bdd8-3feda00275ef-service-ca\") pod \"console-64d44f6ddf-wvrn4\" (UID: \"d3ba2b9b-a777-4c95-bdd8-3feda00275ef\") " pod="openshift-console/console-64d44f6ddf-wvrn4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971824 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cx5nl\" (UniqueName: \"kubernetes.io/projected/d3ba2b9b-a777-4c95-bdd8-3feda00275ef-kube-api-access-cx5nl\") pod \"console-64d44f6ddf-wvrn4\" (UID: \"d3ba2b9b-a777-4c95-bdd8-3feda00275ef\") " pod="openshift-console/console-64d44f6ddf-wvrn4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971840 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/920040aa-0665-4aa8-8f93-dd24feadeef2-config-volume\") pod \"dns-default-5qdss\" (UID: \"920040aa-0665-4aa8-8f93-dd24feadeef2\") " pod="openshift-dns/dns-default-5qdss" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971857 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nkdg2\" (UniqueName: \"kubernetes.io/projected/135a93ff-5a38-46d6-821f-866964572cf5-kube-api-access-nkdg2\") pod \"ingress-canary-gtbd4\" (UID: \"135a93ff-5a38-46d6-821f-866964572cf5\") " pod="openshift-ingress-canary/ingress-canary-gtbd4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971879 5120 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bfdcr\" (UniqueName: \"kubernetes.io/projected/a2d9797c-79c8-41b8-943c-c30d41b5d2ba-kube-api-access-bfdcr\") pod \"packageserver-7d4fc7d867-6fpf4\" (UID: \"a2d9797c-79c8-41b8-943c-c30d41b5d2ba\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fpf4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971895 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cef0f13d-a538-4de5-be19-28719e4e8bfc-webhook-certs\") pod \"multus-admission-controller-69db94689b-d5wst\" (UID: \"cef0f13d-a538-4de5-be19-28719e4e8bfc\") " pod="openshift-multus/multus-admission-controller-69db94689b-d5wst" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971912 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n5nnn\" (UniqueName: \"kubernetes.io/projected/1afcf892-0bad-42b2-9088-3a7c76be334f-kube-api-access-n5nnn\") pod \"machine-config-operator-67c9d58cbb-9qjgp\" (UID: \"1afcf892-0bad-42b2-9088-3a7c76be334f\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-9qjgp" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971961 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d3ba2b9b-a777-4c95-bdd8-3feda00275ef-console-oauth-config\") pod \"console-64d44f6ddf-wvrn4\" (UID: \"d3ba2b9b-a777-4c95-bdd8-3feda00275ef\") " pod="openshift-console/console-64d44f6ddf-wvrn4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971980 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/5d5549cf-9120-4619-8794-574e335d251b-plugins-dir\") pod \"csi-hostpathplugin-z42wj\" (UID: \"5d5549cf-9120-4619-8794-574e335d251b\") " 
pod="hostpath-provisioner/csi-hostpathplugin-z42wj" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.971999 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8c17d3f5-8df5-444a-aff3-958c7bdf9c04-signing-key\") pod \"service-ca-74545575db-q2dhg\" (UID: \"8c17d3f5-8df5-444a-aff3-958c7bdf9c04\") " pod="openshift-service-ca/service-ca-74545575db-q2dhg" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.972016 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c5af33d5-343a-4149-b690-44b4a97ff385-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-f4gw6\" (UID: \"c5af33d5-343a-4149-b690-44b4a97ff385\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f4gw6" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.972035 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/13193ac4-0614-4822-860d-864860616013-srv-cert\") pod \"olm-operator-5cdf44d969-5jrjd\" (UID: \"13193ac4-0614-4822-860d-864860616013\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5jrjd" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.972055 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/84f8d245-d500-4435-89b2-4926bedad82c-etcd-client\") pod \"etcd-operator-69b85846b6-chrjf\" (UID: \"84f8d245-d500-4435-89b2-4926bedad82c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-chrjf" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.972072 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hg7r7\" (UniqueName: \"kubernetes.io/projected/84f8d245-d500-4435-89b2-4926bedad82c-kube-api-access-hg7r7\") pod \"etcd-operator-69b85846b6-chrjf\" (UID: 
\"84f8d245-d500-4435-89b2-4926bedad82c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-chrjf" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.972088 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6223410-b237-49b0-b6a5-50cb4169ba81-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-x89dk\" (UID: \"a6223410-b237-49b0-b6a5-50cb4169ba81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-x89dk" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.972105 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-sgddk\" (UID: \"ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-sgddk" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.972129 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/84f8d245-d500-4435-89b2-4926bedad82c-etcd-ca\") pod \"etcd-operator-69b85846b6-chrjf\" (UID: \"84f8d245-d500-4435-89b2-4926bedad82c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-chrjf" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.972174 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a2d9797c-79c8-41b8-943c-c30d41b5d2ba-apiservice-cert\") pod \"packageserver-7d4fc7d867-6fpf4\" (UID: \"a2d9797c-79c8-41b8-943c-c30d41b5d2ba\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fpf4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.972199 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-92zrl\" (UniqueName: 
\"kubernetes.io/projected/13193ac4-0614-4822-860d-864860616013-kube-api-access-92zrl\") pod \"olm-operator-5cdf44d969-5jrjd\" (UID: \"13193ac4-0614-4822-860d-864860616013\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5jrjd" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.972218 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c-ready\") pod \"cni-sysctl-allowlist-ds-sgddk\" (UID: \"ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-sgddk" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.972253 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ba200aeb-5dcb-4166-83ed-dc53a459e68f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-96ggd\" (UID: \"ba200aeb-5dcb-4166-83ed-dc53a459e68f\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-96ggd" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.972277 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84cc1699-abca-47ca-b641-438429faa1a8-serving-cert\") pod \"service-ca-operator-5b9c976747-8bt64\" (UID: \"84cc1699-abca-47ca-b641-438429faa1a8\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8bt64" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.972860 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a623919a-d893-4f53-9538-2dc253a63989-config-volume\") pod \"collect-profiles-29424480-4wgwc\" (UID: \"a623919a-d893-4f53-9538-2dc253a63989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424480-4wgwc" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 
16:02:46.973133 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5d5549cf-9120-4619-8794-574e335d251b-socket-dir\") pod \"csi-hostpathplugin-z42wj\" (UID: \"5d5549cf-9120-4619-8794-574e335d251b\") " pod="hostpath-provisioner/csi-hostpathplugin-z42wj" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.973180 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-sgddk\" (UID: \"ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-sgddk" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.974172 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d3ba2b9b-a777-4c95-bdd8-3feda00275ef-oauth-serving-cert\") pod \"console-64d44f6ddf-wvrn4\" (UID: \"d3ba2b9b-a777-4c95-bdd8-3feda00275ef\") " pod="openshift-console/console-64d44f6ddf-wvrn4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.974431 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0b99d695-4aaa-49ee-89f2-4597772b73ed-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-pshw7\" (UID: \"0b99d695-4aaa-49ee-89f2-4597772b73ed\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-pshw7" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.974439 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/5d5549cf-9120-4619-8794-574e335d251b-csi-data-dir\") pod \"csi-hostpathplugin-z42wj\" (UID: \"5d5549cf-9120-4619-8794-574e335d251b\") " pod="hostpath-provisioner/csi-hostpathplugin-z42wj" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 
16:02:46.975247 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84f8d245-d500-4435-89b2-4926bedad82c-config\") pod \"etcd-operator-69b85846b6-chrjf\" (UID: \"84f8d245-d500-4435-89b2-4926bedad82c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-chrjf" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.975313 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/13193ac4-0614-4822-860d-864860616013-tmpfs\") pod \"olm-operator-5cdf44d969-5jrjd\" (UID: \"13193ac4-0614-4822-860d-864860616013\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5jrjd" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.975494 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/5d5549cf-9120-4619-8794-574e335d251b-mountpoint-dir\") pod \"csi-hostpathplugin-z42wj\" (UID: \"5d5549cf-9120-4619-8794-574e335d251b\") " pod="hostpath-provisioner/csi-hostpathplugin-z42wj" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.976033 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/84f8d245-d500-4435-89b2-4926bedad82c-etcd-ca\") pod \"etcd-operator-69b85846b6-chrjf\" (UID: \"84f8d245-d500-4435-89b2-4926bedad82c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-chrjf" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.980073 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84cc1699-abca-47ca-b641-438429faa1a8-config\") pod \"service-ca-operator-5b9c976747-8bt64\" (UID: \"84cc1699-abca-47ca-b641-438429faa1a8\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8bt64" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.980521 5120 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5d5549cf-9120-4619-8794-574e335d251b-registration-dir\") pod \"csi-hostpathplugin-z42wj\" (UID: \"5d5549cf-9120-4619-8794-574e335d251b\") " pod="hostpath-provisioner/csi-hostpathplugin-z42wj" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.982783 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/5d5549cf-9120-4619-8794-574e335d251b-plugins-dir\") pod \"csi-hostpathplugin-z42wj\" (UID: \"5d5549cf-9120-4619-8794-574e335d251b\") " pod="hostpath-provisioner/csi-hostpathplugin-z42wj" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.982812 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a623919a-d893-4f53-9538-2dc253a63989-secret-volume\") pod \"collect-profiles-29424480-4wgwc\" (UID: \"a623919a-d893-4f53-9538-2dc253a63989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424480-4wgwc" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.983255 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a2d9797c-79c8-41b8-943c-c30d41b5d2ba-tmpfs\") pod \"packageserver-7d4fc7d867-6fpf4\" (UID: \"a2d9797c-79c8-41b8-943c-c30d41b5d2ba\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fpf4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.983313 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8c17d3f5-8df5-444a-aff3-958c7bdf9c04-signing-cabundle\") pod \"service-ca-74545575db-q2dhg\" (UID: \"8c17d3f5-8df5-444a-aff3-958c7bdf9c04\") " pod="openshift-service-ca/service-ca-74545575db-q2dhg" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.983596 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f117d28d-206e-485f-b234-38e2945b1a7a-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-mvwb4\" (UID: \"f117d28d-206e-485f-b234-38e2945b1a7a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mvwb4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.983872 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 11 16:02:46 crc kubenswrapper[5120]: E1211 16:02:46.984075 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:47.484054453 +0000 UTC m=+116.738357784 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.984557 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6223410-b237-49b0-b6a5-50cb4169ba81-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-x89dk\" (UID: \"a6223410-b237-49b0-b6a5-50cb4169ba81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-x89dk" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.984791 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f117d28d-206e-485f-b234-38e2945b1a7a-config\") pod \"openshift-kube-scheduler-operator-54f497555d-mvwb4\" (UID: \"f117d28d-206e-485f-b234-38e2945b1a7a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mvwb4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.985052 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6223410-b237-49b0-b6a5-50cb4169ba81-config\") pod \"kube-storage-version-migrator-operator-565b79b866-x89dk\" (UID: \"a6223410-b237-49b0-b6a5-50cb4169ba81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-x89dk" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.985511 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/15f819c3-8855-465a-8de2-4bcac9a10708-tmpfs\") pod \"catalog-operator-75ff9f647d-cbgwz\" (UID: \"15f819c3-8855-465a-8de2-4bcac9a10708\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cbgwz" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.988300 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d3ba2b9b-a777-4c95-bdd8-3feda00275ef-console-config\") pod \"console-64d44f6ddf-wvrn4\" (UID: \"d3ba2b9b-a777-4c95-bdd8-3feda00275ef\") " pod="openshift-console/console-64d44f6ddf-wvrn4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.988677 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c-ready\") pod \"cni-sysctl-allowlist-ds-sgddk\" (UID: \"ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-sgddk" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.988869 5120 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1afcf892-0bad-42b2-9088-3a7c76be334f-images\") pod \"machine-config-operator-67c9d58cbb-9qjgp\" (UID: \"1afcf892-0bad-42b2-9088-3a7c76be334f\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-9qjgp" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.988906 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-sgddk\" (UID: \"ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-sgddk" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.989038 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/84f8d245-d500-4435-89b2-4926bedad82c-etcd-service-ca\") pod \"etcd-operator-69b85846b6-chrjf\" (UID: \"84f8d245-d500-4435-89b2-4926bedad82c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-chrjf" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.989511 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/13193ac4-0614-4822-860d-864860616013-profile-collector-cert\") pod \"olm-operator-5cdf44d969-5jrjd\" (UID: \"13193ac4-0614-4822-860d-864860616013\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5jrjd" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.989678 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5d7a2793-71cb-48e4-9f55-527f7b4a7903-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-f5tq6\" (UID: \"5d7a2793-71cb-48e4-9f55-527f7b4a7903\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-f5tq6" Dec 11 16:02:46 crc kubenswrapper[5120]: 
I1211 16:02:46.989692 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/84f8d245-d500-4435-89b2-4926bedad82c-etcd-client\") pod \"etcd-operator-69b85846b6-chrjf\" (UID: \"84f8d245-d500-4435-89b2-4926bedad82c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-chrjf" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.989793 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/84f8d245-d500-4435-89b2-4926bedad82c-tmp-dir\") pod \"etcd-operator-69b85846b6-chrjf\" (UID: \"84f8d245-d500-4435-89b2-4926bedad82c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-chrjf" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.990088 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d3ba2b9b-a777-4c95-bdd8-3feda00275ef-service-ca\") pod \"console-64d44f6ddf-wvrn4\" (UID: \"d3ba2b9b-a777-4c95-bdd8-3feda00275ef\") " pod="openshift-console/console-64d44f6ddf-wvrn4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.990925 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/09d4a454-c53e-446e-9c58-ace5cef3d494-tmp\") pod \"marketplace-operator-547dbd544d-q29gs\" (UID: \"09d4a454-c53e-446e-9c58-ace5cef3d494\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-q29gs" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.991379 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a2d9797c-79c8-41b8-943c-c30d41b5d2ba-webhook-cert\") pod \"packageserver-7d4fc7d867-6fpf4\" (UID: \"a2d9797c-79c8-41b8-943c-c30d41b5d2ba\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fpf4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.991480 5120 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b99d695-4aaa-49ee-89f2-4597772b73ed-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-pshw7\" (UID: \"0b99d695-4aaa-49ee-89f2-4597772b73ed\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-pshw7" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.991725 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84cc1699-abca-47ca-b641-438429faa1a8-serving-cert\") pod \"service-ca-operator-5b9c976747-8bt64\" (UID: \"84cc1699-abca-47ca-b641-438429faa1a8\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8bt64" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.991742 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3ba2b9b-a777-4c95-bdd8-3feda00275ef-trusted-ca-bundle\") pod \"console-64d44f6ddf-wvrn4\" (UID: \"d3ba2b9b-a777-4c95-bdd8-3feda00275ef\") " pod="openshift-console/console-64d44f6ddf-wvrn4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.991775 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84f8d245-d500-4435-89b2-4926bedad82c-serving-cert\") pod \"etcd-operator-69b85846b6-chrjf\" (UID: \"84f8d245-d500-4435-89b2-4926bedad82c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-chrjf" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.992821 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ba200aeb-5dcb-4166-83ed-dc53a459e68f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-96ggd\" (UID: \"ba200aeb-5dcb-4166-83ed-dc53a459e68f\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-96ggd" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.993852 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.996094 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cef0f13d-a538-4de5-be19-28719e4e8bfc-webhook-certs\") pod \"multus-admission-controller-69db94689b-d5wst\" (UID: \"cef0f13d-a538-4de5-be19-28719e4e8bfc\") " pod="openshift-multus/multus-admission-controller-69db94689b-d5wst" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.997695 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f117d28d-206e-485f-b234-38e2945b1a7a-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-mvwb4\" (UID: \"f117d28d-206e-485f-b234-38e2945b1a7a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mvwb4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.997782 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/135a93ff-5a38-46d6-821f-866964572cf5-cert\") pod \"ingress-canary-gtbd4\" (UID: \"135a93ff-5a38-46d6-821f-866964572cf5\") " pod="openshift-ingress-canary/ingress-canary-gtbd4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.998100 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1afcf892-0bad-42b2-9088-3a7c76be334f-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-9qjgp\" (UID: \"1afcf892-0bad-42b2-9088-3a7c76be334f\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-9qjgp" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.998307 5120 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d3ba2b9b-a777-4c95-bdd8-3feda00275ef-console-oauth-config\") pod \"console-64d44f6ddf-wvrn4\" (UID: \"d3ba2b9b-a777-4c95-bdd8-3feda00275ef\") " pod="openshift-console/console-64d44f6ddf-wvrn4" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.998406 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/13193ac4-0614-4822-860d-864860616013-srv-cert\") pod \"olm-operator-5cdf44d969-5jrjd\" (UID: \"13193ac4-0614-4822-860d-864860616013\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5jrjd" Dec 11 16:02:46 crc kubenswrapper[5120]: I1211 16:02:46.998704 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/09d4a454-c53e-446e-9c58-ace5cef3d494-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-q29gs\" (UID: \"09d4a454-c53e-446e-9c58-ace5cef3d494\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-q29gs" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.001087 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/02544bf6-305f-4419-9bd1-fa1662e6b0bb-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-8mr9f\" (UID: \"02544bf6-305f-4419-9bd1-fa1662e6b0bb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8mr9f" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.001857 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5af33d5-343a-4149-b690-44b4a97ff385-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-f4gw6\" (UID: \"c5af33d5-343a-4149-b690-44b4a97ff385\") " 
pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f4gw6" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.002742 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8c17d3f5-8df5-444a-aff3-958c7bdf9c04-signing-key\") pod \"service-ca-74545575db-q2dhg\" (UID: \"8c17d3f5-8df5-444a-aff3-958c7bdf9c04\") " pod="openshift-service-ca/service-ca-74545575db-q2dhg" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.004835 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a2d9797c-79c8-41b8-943c-c30d41b5d2ba-apiservice-cert\") pod \"packageserver-7d4fc7d867-6fpf4\" (UID: \"a2d9797c-79c8-41b8-943c-c30d41b5d2ba\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fpf4" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.010556 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/15f819c3-8855-465a-8de2-4bcac9a10708-srv-cert\") pod \"catalog-operator-75ff9f647d-cbgwz\" (UID: \"15f819c3-8855-465a-8de2-4bcac9a10708\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cbgwz" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.010630 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5d7a2793-71cb-48e4-9f55-527f7b4a7903-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-f5tq6\" (UID: \"5d7a2793-71cb-48e4-9f55-527f7b4a7903\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-f5tq6" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.010684 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3ba2b9b-a777-4c95-bdd8-3feda00275ef-console-serving-cert\") pod \"console-64d44f6ddf-wvrn4\" (UID: 
\"d3ba2b9b-a777-4c95-bdd8-3feda00275ef\") " pod="openshift-console/console-64d44f6ddf-wvrn4" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.010853 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-hfkdh" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.011079 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c5af33d5-343a-4149-b690-44b4a97ff385-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-f4gw6\" (UID: \"c5af33d5-343a-4149-b690-44b4a97ff385\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f4gw6" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.011207 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/15f819c3-8855-465a-8de2-4bcac9a10708-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-cbgwz\" (UID: \"15f819c3-8855-465a-8de2-4bcac9a10708\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cbgwz" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.013543 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.024914 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/298a1ff3-c53d-4a3a-b113-b9dab74f54a9-node-bootstrap-token\") pod \"machine-config-server-whnqw\" (UID: \"298a1ff3-c53d-4a3a-b113-b9dab74f54a9\") " pod="openshift-machine-config-operator/machine-config-server-whnqw" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.034552 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 11 16:02:47 crc kubenswrapper[5120]: 
I1211 16:02:47.052717 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/298a1ff3-c53d-4a3a-b113-b9dab74f54a9-certs\") pod \"machine-config-server-whnqw\" (UID: \"298a1ff3-c53d-4a3a-b113-b9dab74f54a9\") " pod="openshift-machine-config-operator/machine-config-server-whnqw" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.053485 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.064861 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46382: no serving certificate available for the kubelet" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.076602 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:47 crc kubenswrapper[5120]: E1211 16:02:47.077032 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:47.577015538 +0000 UTC m=+116.831318869 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.078006 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.086814 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/920040aa-0665-4aa8-8f93-dd24feadeef2-config-volume\") pod \"dns-default-5qdss\" (UID: \"920040aa-0665-4aa8-8f93-dd24feadeef2\") " pod="openshift-dns/dns-default-5qdss" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.093636 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.113079 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.126675 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/920040aa-0665-4aa8-8f93-dd24feadeef2-metrics-tls\") pod \"dns-default-5qdss\" (UID: \"920040aa-0665-4aa8-8f93-dd24feadeef2\") " pod="openshift-dns/dns-default-5qdss" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.129087 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46398: no serving certificate available for the kubelet" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.174742 5120 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-bound-sa-token\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.178135 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:47 crc kubenswrapper[5120]: E1211 16:02:47.178691 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:47.678674902 +0000 UTC m=+116.932978233 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.206666 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1964f37-5430-4e1f-93aa-0f6e3761cff6-kube-api-access\") pod \"kube-apiserver-operator-575994946d-8jspp\" (UID: \"a1964f37-5430-4e1f-93aa-0f6e3761cff6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8jspp" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.222719 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnq45\" (UniqueName: \"kubernetes.io/projected/040e966f-334a-4b63-a329-01b73c6817f2-kube-api-access-lnq45\") pod \"dns-operator-799b87ffcd-xfffw\" (UID: \"040e966f-334a-4b63-a329-01b73c6817f2\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xfffw" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.232686 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w26xk\" (UniqueName: \"kubernetes.io/projected/cfe0ce73-8c24-4494-a66c-54fb1f143400-kube-api-access-w26xk\") pod \"cluster-image-registry-operator-86c45576b9-x4v9v\" (UID: \"cfe0ce73-8c24-4494-a66c-54fb1f143400\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-x4v9v" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.241663 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46400: no serving certificate available for the kubelet" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.257445 5120 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cfe0ce73-8c24-4494-a66c-54fb1f143400-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-x4v9v\" (UID: \"cfe0ce73-8c24-4494-a66c-54fb1f143400\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-x4v9v" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.268918 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpfz9\" (UniqueName: \"kubernetes.io/projected/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-kube-api-access-vpfz9\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.279854 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:47 crc kubenswrapper[5120]: E1211 16:02:47.280437 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:47.780417309 +0000 UTC m=+117.034720650 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.291144 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhlhm\" (UniqueName: \"kubernetes.io/projected/02544bf6-305f-4419-9bd1-fa1662e6b0bb-kube-api-access-fhlhm\") pod \"package-server-manager-77f986bd66-8mr9f\" (UID: \"02544bf6-305f-4419-9bd1-fa1662e6b0bb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8mr9f" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.308937 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4w67\" (UniqueName: \"kubernetes.io/projected/15f819c3-8855-465a-8de2-4bcac9a10708-kube-api-access-v4w67\") pod \"catalog-operator-75ff9f647d-cbgwz\" (UID: \"15f819c3-8855-465a-8de2-4bcac9a10708\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cbgwz" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.335534 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rrcc\" (UniqueName: \"kubernetes.io/projected/84cc1699-abca-47ca-b641-438429faa1a8-kube-api-access-5rrcc\") pod \"service-ca-operator-5b9c976747-8bt64\" (UID: \"84cc1699-abca-47ca-b641-438429faa1a8\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8bt64" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.352080 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8jbt\" (UniqueName: 
\"kubernetes.io/projected/5d5549cf-9120-4619-8794-574e335d251b-kube-api-access-w8jbt\") pod \"csi-hostpathplugin-z42wj\" (UID: \"5d5549cf-9120-4619-8794-574e335d251b\") " pod="hostpath-provisioner/csi-hostpathplugin-z42wj" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.375814 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfvsp\" (UniqueName: \"kubernetes.io/projected/a6223410-b237-49b0-b6a5-50cb4169ba81-kube-api-access-bfvsp\") pod \"kube-storage-version-migrator-operator-565b79b866-x89dk\" (UID: \"a6223410-b237-49b0-b6a5-50cb4169ba81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-x89dk" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.381592 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:47 crc kubenswrapper[5120]: E1211 16:02:47.382108 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:47.882090264 +0000 UTC m=+117.136393595 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.392784 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5nnn\" (UniqueName: \"kubernetes.io/projected/1afcf892-0bad-42b2-9088-3a7c76be334f-kube-api-access-n5nnn\") pod \"machine-config-operator-67c9d58cbb-9qjgp\" (UID: \"1afcf892-0bad-42b2-9088-3a7c76be334f\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-9qjgp" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.408509 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkdg2\" (UniqueName: \"kubernetes.io/projected/135a93ff-5a38-46d6-821f-866964572cf5-kube-api-access-nkdg2\") pod \"ingress-canary-gtbd4\" (UID: \"135a93ff-5a38-46d6-821f-866964572cf5\") " pod="openshift-ingress-canary/ingress-canary-gtbd4" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.431416 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8jspp" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.441966 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfdcr\" (UniqueName: \"kubernetes.io/projected/a2d9797c-79c8-41b8-943c-c30d41b5d2ba-kube-api-access-bfdcr\") pod \"packageserver-7d4fc7d867-6fpf4\" (UID: \"a2d9797c-79c8-41b8-943c-c30d41b5d2ba\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fpf4" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.454992 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-92zrl\" (UniqueName: \"kubernetes.io/projected/13193ac4-0614-4822-860d-864860616013-kube-api-access-92zrl\") pod \"olm-operator-5cdf44d969-5jrjd\" (UID: \"13193ac4-0614-4822-860d-864860616013\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5jrjd" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.469866 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46406: no serving certificate available for the kubelet" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.472089 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrjzf\" (UniqueName: \"kubernetes.io/projected/298a1ff3-c53d-4a3a-b113-b9dab74f54a9-kube-api-access-vrjzf\") pod \"machine-config-server-whnqw\" (UID: \"298a1ff3-c53d-4a3a-b113-b9dab74f54a9\") " pod="openshift-machine-config-operator/machine-config-server-whnqw" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.478661 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-x4v9v" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.482802 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:47 crc kubenswrapper[5120]: E1211 16:02:47.482959 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:47.982938028 +0000 UTC m=+117.237241359 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.483392 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:47 crc kubenswrapper[5120]: E1211 16:02:47.483690 5120 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:47.983683657 +0000 UTC m=+117.237986988 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.485817 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-z42wj" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.490753 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f117d28d-206e-485f-b234-38e2945b1a7a-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-mvwb4\" (UID: \"f117d28d-206e-485f-b234-38e2945b1a7a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mvwb4" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.492539 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-xfffw" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.500424 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-gtbd4" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.502733 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-vl5r9" event={"ID":"78d60dd3-8522-4475-a323-6acf4ac1abdc","Type":"ContainerStarted","Data":"7316cc41880e3d7f8db5e522e1e2cc4c068972a6a172b4d89c77d78a094acc18"} Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.502863 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-vl5r9" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.507478 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-whnqw" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.507873 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mvwb4" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.509808 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg7r7\" (UniqueName: \"kubernetes.io/projected/84f8d245-d500-4435-89b2-4926bedad82c-kube-api-access-hg7r7\") pod \"etcd-operator-69b85846b6-chrjf\" (UID: \"84f8d245-d500-4435-89b2-4926bedad82c\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-chrjf" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.511014 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n" event={"ID":"afce2ce8-429d-4f15-b746-a6de58cd6246","Type":"ContainerStarted","Data":"be43cb24a12ffc7fe51db268b8b8c7275c5f8446982e6439237138fbcf51e423"} Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.515543 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-x89dk" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.527263 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb" event={"ID":"a9fb027d-1ae2-484c-be51-43df1da17bde","Type":"ContainerStarted","Data":"6cdde85fc4c43765a88dcb79fbcceecf6e6a61d2b6abedc0e207a3595996224b"} Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.527304 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb" event={"ID":"a9fb027d-1ae2-484c-be51-43df1da17bde","Type":"ContainerStarted","Data":"8302bd905f2380ea05b21fd1072a43d304d81f70fe83c9a3afb8218a539a4ea7"} Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.531487 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cbgwz" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.536398 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-v9567" event={"ID":"16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240","Type":"ContainerStarted","Data":"6163ff4aaf627cb8cdf6f4b2240882217fd71cf2fc7b9da49b651e8b5f0db444"} Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.536432 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-v9567" event={"ID":"16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240","Type":"ContainerStarted","Data":"46f77820b3c7d65646a3ed1452e29f16aa048ac8e66d6eb1ce2af17afa8b37c8"} Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.536946 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x24rg\" (UniqueName: \"kubernetes.io/projected/c5af33d5-343a-4149-b690-44b4a97ff385-kube-api-access-x24rg\") pod \"ingress-operator-6b9cb4dbcf-f4gw6\" (UID: 
\"c5af33d5-343a-4149-b690-44b4a97ff385\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f4gw6" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.552023 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h7x8\" (UniqueName: \"kubernetes.io/projected/09d4a454-c53e-446e-9c58-ace5cef3d494-kube-api-access-8h7x8\") pod \"marketplace-operator-547dbd544d-q29gs\" (UID: \"09d4a454-c53e-446e-9c58-ace5cef3d494\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-q29gs" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.552452 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.569027 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fpf4" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.575756 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8bt64" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.576429 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqzmt\" (UniqueName: \"kubernetes.io/projected/726e6606-5b45-4bba-865a-f581e8f6c218-kube-api-access-xqzmt\") pod \"downloads-747b44746d-8rdg7\" (UID: \"726e6606-5b45-4bba-865a-f581e8f6c218\") " pod="openshift-console/downloads-747b44746d-8rdg7" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.955807 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-9qjgp" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.955925 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8mr9f" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.956356 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-8rdg7" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.956926 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5jrjd" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.957233 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:47 crc kubenswrapper[5120]: E1211 16:02:47.957356 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:48.457340476 +0000 UTC m=+117.711643807 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.957425 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-chrjf" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.957579 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-q29gs" Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.957587 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:47 crc kubenswrapper[5120]: E1211 16:02:47.957940 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:48.45792098 +0000 UTC m=+117.712224311 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:47 crc kubenswrapper[5120]: I1211 16:02:47.963821 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v84kf\" (UniqueName: \"kubernetes.io/projected/5d7a2793-71cb-48e4-9f55-527f7b4a7903-kube-api-access-v84kf\") pod \"machine-config-controller-f9cdd68f7-f5tq6\" (UID: \"5d7a2793-71cb-48e4-9f55-527f7b4a7903\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-f5tq6" Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.006369 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6lbr\" (UniqueName: \"kubernetes.io/projected/920040aa-0665-4aa8-8f93-dd24feadeef2-kube-api-access-q6lbr\") pod \"dns-default-5qdss\" (UID: \"920040aa-0665-4aa8-8f93-dd24feadeef2\") " pod="openshift-dns/dns-default-5qdss" Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.011783 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsqws\" (UniqueName: \"kubernetes.io/projected/ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c-kube-api-access-rsqws\") pod \"cni-sysctl-allowlist-ds-sgddk\" (UID: \"ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-sgddk" Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.015686 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2f2f\" (UniqueName: \"kubernetes.io/projected/8c17d3f5-8df5-444a-aff3-958c7bdf9c04-kube-api-access-t2f2f\") pod \"service-ca-74545575db-q2dhg\" (UID: 
\"8c17d3f5-8df5-444a-aff3-958c7bdf9c04\") " pod="openshift-service-ca/service-ca-74545575db-q2dhg" Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.017128 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46410: no serving certificate available for the kubelet" Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.027479 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-65856\" (UniqueName: \"kubernetes.io/projected/ba200aeb-5dcb-4166-83ed-dc53a459e68f-kube-api-access-65856\") pod \"control-plane-machine-set-operator-75ffdb6fcd-96ggd\" (UID: \"ba200aeb-5dcb-4166-83ed-dc53a459e68f\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-96ggd" Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.028258 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lggkz\" (UniqueName: \"kubernetes.io/projected/a623919a-d893-4f53-9538-2dc253a63989-kube-api-access-lggkz\") pod \"collect-profiles-29424480-4wgwc\" (UID: \"a623919a-d893-4f53-9538-2dc253a63989\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424480-4wgwc" Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.029691 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5af33d5-343a-4149-b690-44b4a97ff385-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-f4gw6\" (UID: \"c5af33d5-343a-4149-b690-44b4a97ff385\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f4gw6" Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.029832 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx5nl\" (UniqueName: \"kubernetes.io/projected/d3ba2b9b-a777-4c95-bdd8-3feda00275ef-kube-api-access-cx5nl\") pod \"console-64d44f6ddf-wvrn4\" (UID: \"d3ba2b9b-a777-4c95-bdd8-3feda00275ef\") " pod="openshift-console/console-64d44f6ddf-wvrn4" Dec 11 16:02:48 crc 
kubenswrapper[5120]: I1211 16:02:48.030194 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b99d695-4aaa-49ee-89f2-4597772b73ed-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-pshw7\" (UID: \"0b99d695-4aaa-49ee-89f2-4597772b73ed\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-pshw7" Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.034395 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wtxm\" (UniqueName: \"kubernetes.io/projected/cef0f13d-a538-4de5-be19-28719e4e8bfc-kube-api-access-6wtxm\") pod \"multus-admission-controller-69db94689b-d5wst\" (UID: \"cef0f13d-a538-4de5-be19-28719e4e8bfc\") " pod="openshift-multus/multus-admission-controller-69db94689b-d5wst" Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.040619 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f4gw6" Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.056455 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-d5wst" Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.072882 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-q2dhg" Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.073371 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:48 crc kubenswrapper[5120]: E1211 16:02:48.073481 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:48.573453835 +0000 UTC m=+117.827757166 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.085987 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:48 crc kubenswrapper[5120]: E1211 16:02:48.086480 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:48.586457073 +0000 UTC m=+117.840760404 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.091690 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-sgddk" Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.114961 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5qdss" Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.128278 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-96ggd" Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.140844 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-pshw7" Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.148208 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-f5tq6" Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.155167 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424480-4wgwc" Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.165494 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64d44f6ddf-wvrn4" Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.206612 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:48 crc kubenswrapper[5120]: E1211 16:02:48.207027 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:48.707011264 +0000 UTC m=+117.961314595 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.307779 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:48 crc kubenswrapper[5120]: E1211 16:02:48.308115 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: 
nodeName:}" failed. No retries permitted until 2025-12-11 16:02:48.808101904 +0000 UTC m=+118.062405235 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.408581 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:48 crc kubenswrapper[5120]: E1211 16:02:48.408953 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:48.908937728 +0000 UTC m=+118.163241059 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.511311 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:48 crc kubenswrapper[5120]: E1211 16:02:48.511619 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:49.011607398 +0000 UTC m=+118.265910729 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.523624 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8jspp"] Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.528787 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-v9567" Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.540774 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-sgddk" event={"ID":"ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c","Type":"ContainerStarted","Data":"f5932be806d357f23644514fe9c7aaafef9ef57dafe48a5d6144bf96f0cd370a"} Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.550463 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-whnqw" event={"ID":"298a1ff3-c53d-4a3a-b113-b9dab74f54a9","Type":"ContainerStarted","Data":"831892be235e359c00f7d7daf652f22b3b5a05013edde2aad1a971eff46ba538"} Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.611927 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:48 crc kubenswrapper[5120]: E1211 16:02:48.613471 5120 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:49.113452817 +0000 UTC m=+118.367756148 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.700870 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46418: no serving certificate available for the kubelet" Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.714065 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:48 crc kubenswrapper[5120]: E1211 16:02:48.714417 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:49.214398764 +0000 UTC m=+118.468702095 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.749410 5120 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-v9567 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 16:02:48 crc kubenswrapper[5120]: [-]has-synced failed: reason withheld Dec 11 16:02:48 crc kubenswrapper[5120]: [+]process-running ok Dec 11 16:02:48 crc kubenswrapper[5120]: healthz check failed Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.749486 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-v9567" podUID="16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.817617 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:48 crc kubenswrapper[5120]: E1211 16:02:48.818091 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-11 16:02:49.318074829 +0000 UTC m=+118.572378160 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.835225 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-x4v9v"] Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.838539 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-xfffw"] Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.869024 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-z42wj"] Dec 11 16:02:48 crc kubenswrapper[5120]: I1211 16:02:48.919113 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:48 crc kubenswrapper[5120]: E1211 16:02:48.919784 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:49.419770895 +0000 UTC m=+118.674074226 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.024218 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:49 crc kubenswrapper[5120]: E1211 16:02:49.024865 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:49.524847325 +0000 UTC m=+118.779150656 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.124561 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mvwb4"] Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.125929 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:49 crc kubenswrapper[5120]: E1211 16:02:49.126187 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:49.626175151 +0000 UTC m=+118.880478472 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.235789 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:49 crc kubenswrapper[5120]: E1211 16:02:49.236125 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:49.736107664 +0000 UTC m=+118.990410995 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.336801 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:49 crc kubenswrapper[5120]: E1211 16:02:49.337194 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:49.837179684 +0000 UTC m=+119.091483015 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.347869 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-z2h5b" podStartSLOduration=100.347849813 podStartE2EDuration="1m40.347849813s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:49.344907439 +0000 UTC m=+118.599210770" watchObservedRunningTime="2025-12-11 16:02:49.347849813 +0000 UTC m=+118.602153144" Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.383453 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-kwtc2" podStartSLOduration=100.383435321 podStartE2EDuration="1m40.383435321s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:49.382937429 +0000 UTC m=+118.637240750" watchObservedRunningTime="2025-12-11 16:02:49.383435321 +0000 UTC m=+118.637738652" Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.427072 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" podStartSLOduration=100.427050151 podStartE2EDuration="1m40.427050151s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:49.424913547 +0000 UTC m=+118.679216878" watchObservedRunningTime="2025-12-11 16:02:49.427050151 +0000 UTC m=+118.681353482" Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.437883 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:49 crc kubenswrapper[5120]: E1211 16:02:49.438016 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:49.937990597 +0000 UTC m=+119.192293928 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.438442 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:49 crc kubenswrapper[5120]: E1211 16:02:49.438757 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:49.938743286 +0000 UTC m=+119.193046617 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.489852 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb" podStartSLOduration=100.489829725 podStartE2EDuration="1m40.489829725s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:49.489719552 +0000 UTC m=+118.744022913" watchObservedRunningTime="2025-12-11 16:02:49.489829725 +0000 UTC m=+118.744133056" Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.502617 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fpf4"] Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.516780 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-96ggd"] Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.525311 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8mr9f"] Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.528175 5120 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-v9567 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 16:02:49 crc kubenswrapper[5120]: [-]has-synced 
failed: reason withheld Dec 11 16:02:49 crc kubenswrapper[5120]: [+]process-running ok Dec 11 16:02:49 crc kubenswrapper[5120]: healthz check failed Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.528228 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-v9567" podUID="16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.530451 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-gtbd4"] Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.534073 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-8rdg7"] Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.540074 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:49 crc kubenswrapper[5120]: E1211 16:02:49.540500 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:50.040479013 +0000 UTC m=+119.294782344 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.544674 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-d5wst"] Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.547423 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-8bt64"] Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.549234 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cbgwz"] Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.552889 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-x89dk"] Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.554360 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-chrjf"] Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.554655 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-xkpsj" podStartSLOduration=100.55464689 podStartE2EDuration="1m40.55464689s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:49.515180295 +0000 UTC m=+118.769483626" watchObservedRunningTime="2025-12-11 
16:02:49.55464689 +0000 UTC m=+118.808950221" Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.556795 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-vl5r9" podStartSLOduration=100.556785984 podStartE2EDuration="1m40.556785984s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:49.549345316 +0000 UTC m=+118.803648647" watchObservedRunningTime="2025-12-11 16:02:49.556785984 +0000 UTC m=+118.811089305" Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.559446 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f4gw6"] Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.563844 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-q2dhg"] Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.568250 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-sgddk" event={"ID":"ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c","Type":"ContainerStarted","Data":"b9694e04edc1482d846aea56f4a8549e6e163687fb078d54c939afbe48100e19"} Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.569605 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mvwb4" event={"ID":"f117d28d-206e-485f-b234-38e2945b1a7a","Type":"ContainerStarted","Data":"59cfc70d67808627fab6e7ee3ca66488979393c34ce5f5885abbc71f03182668"} Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.570318 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-xfffw" 
event={"ID":"040e966f-334a-4b63-a329-01b73c6817f2","Type":"ContainerStarted","Data":"93849479a481dd34451e3d1ccbb81a328bd0644054d99420762b82ffc6538116"} Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.571772 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-q29gs"] Dec 11 16:02:49 crc kubenswrapper[5120]: W1211 16:02:49.575891 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02544bf6_305f_4419_9bd1_fa1662e6b0bb.slice/crio-eaa9f7641de7d91848a2516bab9ddc45a424caef6d768e94c7ccba904e157a43 WatchSource:0}: Error finding container eaa9f7641de7d91848a2516bab9ddc45a424caef6d768e94c7ccba904e157a43: Status 404 returned error can't find the container with id eaa9f7641de7d91848a2516bab9ddc45a424caef6d768e94c7ccba904e157a43 Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.576339 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8jspp" event={"ID":"a1964f37-5430-4e1f-93aa-0f6e3761cff6","Type":"ContainerStarted","Data":"5102eb1dc8652618cf14432dab5e44efd21d006fa0b6d4f17b1595a35e273ee1"} Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.576380 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8jspp" event={"ID":"a1964f37-5430-4e1f-93aa-0f6e3761cff6","Type":"ContainerStarted","Data":"f97fbbab798656cf553b0300db991f6b115964eea3932f285bb1dfde553b6dd2"} Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.578354 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-9qjgp"] Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.585765 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5qdss"] Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.586305 
5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-whnqw" event={"ID":"298a1ff3-c53d-4a3a-b113-b9dab74f54a9","Type":"ContainerStarted","Data":"df1c8de68d0977e1e4e0bda921743f775f6dc507b6776955ef95dfd9a9514926"} Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.601202 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424480-4wgwc"] Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.602300 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-lzsrd" podStartSLOduration=100.602279532 podStartE2EDuration="1m40.602279532s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:49.586077583 +0000 UTC m=+118.840380934" watchObservedRunningTime="2025-12-11 16:02:49.602279532 +0000 UTC m=+118.856582893" Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.603209 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5jrjd"] Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.609872 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-x4v9v" event={"ID":"cfe0ce73-8c24-4494-a66c-54fb1f143400","Type":"ContainerStarted","Data":"fa170d36465377163d48ec861cc00ef41ef86ea51820d9da782e39a255cd42ed"} Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.623482 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-f5tq6"] Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.623903 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-z42wj" 
event={"ID":"5d5549cf-9120-4619-8794-574e335d251b","Type":"ContainerStarted","Data":"2e98a601e45b77b0ebc82548344ff720da6108bafabd65ac93b300b0854cd509"} Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.631909 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-hfkdh" podStartSLOduration=100.631893399 podStartE2EDuration="1m40.631893399s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:49.62997515 +0000 UTC m=+118.884278501" watchObservedRunningTime="2025-12-11 16:02:49.631893399 +0000 UTC m=+118.886196730" Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.637953 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-wvrn4"] Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.642776 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:49 crc kubenswrapper[5120]: E1211 16:02:49.643520 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:50.143503782 +0000 UTC m=+119.397807113 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.661724 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-pshw7"] Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.704425 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-hp4tr" podStartSLOduration=100.704408328 podStartE2EDuration="1m40.704408328s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:49.666267786 +0000 UTC m=+118.920571127" watchObservedRunningTime="2025-12-11 16:02:49.704408328 +0000 UTC m=+118.958711659" Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.744822 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:49 crc kubenswrapper[5120]: E1211 16:02:49.745772 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-11 16:02:50.245756311 +0000 UTC m=+119.500059642 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.756017 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-jdp2z" podStartSLOduration=100.755994019 podStartE2EDuration="1m40.755994019s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:49.755833615 +0000 UTC m=+119.010136946" watchObservedRunningTime="2025-12-11 16:02:49.755994019 +0000 UTC m=+119.010297350" Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.759262 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t" podStartSLOduration=100.759254092 podStartE2EDuration="1m40.759254092s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:49.707428464 +0000 UTC m=+118.961731795" watchObservedRunningTime="2025-12-11 16:02:49.759254092 +0000 UTC m=+119.013557443" Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.825531 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2" podStartSLOduration=99.825511893 podStartE2EDuration="1m39.825511893s" podCreationTimestamp="2025-12-11 16:01:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:49.825217416 +0000 UTC m=+119.079520747" watchObservedRunningTime="2025-12-11 16:02:49.825511893 +0000 UTC m=+119.079815224" Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.826564 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n" podStartSLOduration=99.82655929 podStartE2EDuration="1m39.82655929s" podCreationTimestamp="2025-12-11 16:01:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:49.787467413 +0000 UTC m=+119.041770744" watchObservedRunningTime="2025-12-11 16:02:49.82655929 +0000 UTC m=+119.080862621" Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.847593 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:49 crc kubenswrapper[5120]: E1211 16:02:49.847907 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:50.347895108 +0000 UTC m=+119.602198439 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.868964 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-cgkwz" podStartSLOduration=100.868947909 podStartE2EDuration="1m40.868947909s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:49.868628721 +0000 UTC m=+119.122932052" watchObservedRunningTime="2025-12-11 16:02:49.868947909 +0000 UTC m=+119.123251240" Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.914711 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-v9567" podStartSLOduration=100.914696363 podStartE2EDuration="1m40.914696363s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:49.913418591 +0000 UTC m=+119.167721922" watchObservedRunningTime="2025-12-11 16:02:49.914696363 +0000 UTC m=+119.168999694" Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.949117 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:49 crc kubenswrapper[5120]: E1211 16:02:49.949885 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:50.44986589 +0000 UTC m=+119.704169221 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:49 crc kubenswrapper[5120]: I1211 16:02:49.989026 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-whnqw" podStartSLOduration=7.989005808 podStartE2EDuration="7.989005808s" podCreationTimestamp="2025-12-11 16:02:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:49.954553548 +0000 UTC m=+119.208856879" watchObservedRunningTime="2025-12-11 16:02:49.989005808 +0000 UTC m=+119.243309139" Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.014562 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46432: no serving certificate available for the kubelet" Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.051324 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" 
(UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:50 crc kubenswrapper[5120]: E1211 16:02:50.051861 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:50.551826212 +0000 UTC m=+119.806129543 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.131664 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n" Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.132020 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n" Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.149103 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n" Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.153917 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:50 crc kubenswrapper[5120]: E1211 16:02:50.154064 5120 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:50.65404193 +0000 UTC m=+119.908345261 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.154608 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:50 crc kubenswrapper[5120]: E1211 16:02:50.155044 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:50.655034865 +0000 UTC m=+119.909338196 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.167264 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-8jspp" podStartSLOduration=101.167246853 podStartE2EDuration="1m41.167246853s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:49.990232218 +0000 UTC m=+119.244535569" watchObservedRunningTime="2025-12-11 16:02:50.167246853 +0000 UTC m=+119.421550184" Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.256269 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:50 crc kubenswrapper[5120]: E1211 16:02:50.257686 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:50.757670244 +0000 UTC m=+120.011973575 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.358013 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:50 crc kubenswrapper[5120]: E1211 16:02:50.358395 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:50.858381775 +0000 UTC m=+120.112685106 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.458686 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:50 crc kubenswrapper[5120]: E1211 16:02:50.459221 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:50.959188898 +0000 UTC m=+120.213492219 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.461581 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:50 crc kubenswrapper[5120]: E1211 16:02:50.461865 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:50.961852605 +0000 UTC m=+120.216155926 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.530922 5120 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-v9567 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 11 16:02:50 crc kubenswrapper[5120]: [-]has-synced failed: reason withheld
Dec 11 16:02:50 crc kubenswrapper[5120]: [+]process-running ok
Dec 11 16:02:50 crc kubenswrapper[5120]: healthz check failed
Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.530975 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-v9567" podUID="16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.563693 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 11 16:02:50 crc kubenswrapper[5120]: E1211 16:02:50.564304 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:51.064281549 +0000 UTC m=+120.318584890 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.611172 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb"
Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.617587 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb"
Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.643261 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb"
Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.666088 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb"
Dec 11 16:02:50 crc kubenswrapper[5120]: E1211 16:02:50.666587 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:51.166573989 +0000 UTC m=+120.420877320 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.688456 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mvwb4" event={"ID":"f117d28d-206e-485f-b234-38e2945b1a7a","Type":"ContainerStarted","Data":"ba7e84035f70a2ff571255aee7b8ee6c79d765ae063ecf9f0519c9000f676812"}
Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.694999 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-f5tq6" event={"ID":"5d7a2793-71cb-48e4-9f55-527f7b4a7903","Type":"ContainerStarted","Data":"7e6d760218d1f3ad8d945ee3999b0a87532997d77eafcee63b3adc1895288759"}
Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.695032 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-f5tq6" event={"ID":"5d7a2793-71cb-48e4-9f55-527f7b4a7903","Type":"ContainerStarted","Data":"729d6b486a924a0cb5f11735ac90ea152984a3938e7023b2f7cb125cd992a0de"}
Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.725896 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-mvwb4" podStartSLOduration=101.725882515 podStartE2EDuration="1m41.725882515s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:50.724832919 +0000 UTC m=+119.979136250" watchObservedRunningTime="2025-12-11 16:02:50.725882515 +0000 UTC m=+119.980185836"
Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.726680 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-xfffw" event={"ID":"040e966f-334a-4b63-a329-01b73c6817f2","Type":"ContainerStarted","Data":"2e57d90de7d9ad5735a63ee6b9ef31d305ccaa41e6717dbb1a1893ab855db5ad"}
Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.742607 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-chrjf" event={"ID":"84f8d245-d500-4435-89b2-4926bedad82c","Type":"ContainerStarted","Data":"6d17ef20ec9e27f3e02c6943d62250b08ffdec15479f611ac786b74f76906d3e"}
Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.755597 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f4gw6" event={"ID":"c5af33d5-343a-4149-b690-44b4a97ff385","Type":"ContainerStarted","Data":"3efc180c32b02397a910237e34861b473d9a15a152843a81be2977196f8c9a87"}
Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.755633 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f4gw6" event={"ID":"c5af33d5-343a-4149-b690-44b4a97ff385","Type":"ContainerStarted","Data":"3a64d3b7ba6892854462209e7943eb1e0b227a578d27437c723e43628fd1cfa1"}
Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.759287 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5qdss" event={"ID":"920040aa-0665-4aa8-8f93-dd24feadeef2","Type":"ContainerStarted","Data":"cca9851254cb1d9d0b65ef9bfb0e5c8ce8ef699fbb0d627d5530fd65c34feada"}
Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.767680 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 11 16:02:50 crc kubenswrapper[5120]: E1211 16:02:50.768785 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:51.268770167 +0000 UTC m=+120.523073498 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.824047 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-q2dhg" event={"ID":"8c17d3f5-8df5-444a-aff3-958c7bdf9c04","Type":"ContainerStarted","Data":"b95a2a944ee4b6f49210de17a6353007d7260f084820507dd4a872b10ce0b1ea"}
Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.824089 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-q2dhg" event={"ID":"8c17d3f5-8df5-444a-aff3-958c7bdf9c04","Type":"ContainerStarted","Data":"008631f4a42171c5e46f16f126f29ec427893b39f082a4ee6e1916f0fff34f23"}
Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.871085 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb"
Dec 11 16:02:50 crc kubenswrapper[5120]: E1211 16:02:50.871669 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:51.371656643 +0000 UTC m=+120.625959974 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.875394 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8mr9f" event={"ID":"02544bf6-305f-4419-9bd1-fa1662e6b0bb","Type":"ContainerStarted","Data":"bcf21454cfd7364a5efe9289f9a168714586654a2bdb79c5055d43b7063b14a4"}
Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.875443 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8mr9f" event={"ID":"02544bf6-305f-4419-9bd1-fa1662e6b0bb","Type":"ContainerStarted","Data":"eaa9f7641de7d91848a2516bab9ddc45a424caef6d768e94c7ccba904e157a43"}
Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.888438 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-q29gs" event={"ID":"09d4a454-c53e-446e-9c58-ace5cef3d494","Type":"ContainerStarted","Data":"c807a398116b049c7b5eac4ea99ee09c5562c9ba1845966dfcd7cc9319941221"}
Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.888483 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-q29gs" event={"ID":"09d4a454-c53e-446e-9c58-ace5cef3d494","Type":"ContainerStarted","Data":"bddff2d149bf6526b42eb0b69b0f00bfe59d535bc5de44242b6a729722292852"}
Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.889749 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-q29gs"
Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.918637 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-pshw7" event={"ID":"0b99d695-4aaa-49ee-89f2-4597772b73ed","Type":"ContainerStarted","Data":"324be701a00005c1fea64ad22cef4312734c88f26b2f400402d3f65a1593fe3d"}
Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.919113 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-q2dhg" podStartSLOduration=100.91909867 podStartE2EDuration="1m40.91909867s" podCreationTimestamp="2025-12-11 16:01:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:50.918603607 +0000 UTC m=+120.172906938" watchObservedRunningTime="2025-12-11 16:02:50.91909867 +0000 UTC m=+120.173402001"
Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.927426 5120 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-q29gs container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.927484 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-q29gs" podUID="09d4a454-c53e-446e-9c58-ace5cef3d494" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused"
Dec 11 16:02:50 crc kubenswrapper[5120]: I1211 16:02:50.973667 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 11 16:02:50 crc kubenswrapper[5120]: E1211 16:02:50.978086 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:51.478066977 +0000 UTC m=+120.732370308 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.004099 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424480-4wgwc" event={"ID":"a623919a-d893-4f53-9538-2dc253a63989","Type":"ContainerStarted","Data":"fbbdb636ad061fb8c09554b00fdaf309990243e50ff393f0845cc62e27e37950"}
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.004142 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424480-4wgwc" event={"ID":"a623919a-d893-4f53-9538-2dc253a63989","Type":"ContainerStarted","Data":"68a5c33570cda3f3e50bf95627fff552e7e5279803368de02eab51205635f000"}
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.035794 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-q29gs" podStartSLOduration=101.035777213 podStartE2EDuration="1m41.035777213s" podCreationTimestamp="2025-12-11 16:01:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:51.001226051 +0000 UTC m=+120.255529382" watchObservedRunningTime="2025-12-11 16:02:51.035777213 +0000 UTC m=+120.290080534"
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.036683 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-pshw7" podStartSLOduration=102.036677656 podStartE2EDuration="1m42.036677656s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:51.035434644 +0000 UTC m=+120.289737985" watchObservedRunningTime="2025-12-11 16:02:51.036677656 +0000 UTC m=+120.290980987"
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.112366 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb"
Dec 11 16:02:51 crc kubenswrapper[5120]: E1211 16:02:51.120135 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:51.620118561 +0000 UTC m=+120.874421892 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.156426 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-96ggd" event={"ID":"ba200aeb-5dcb-4166-83ed-dc53a459e68f","Type":"ContainerStarted","Data":"140d2a6befa74bddafe2c52fa2fbe620fcabcfca5ef820934953c4634175beb1"}
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.156488 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-96ggd" event={"ID":"ba200aeb-5dcb-4166-83ed-dc53a459e68f","Type":"ContainerStarted","Data":"596b599f065a43d64b8ea0304e17c26cb78e61819dd6011d6f647853b7496bff"}
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.156498 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cbgwz" event={"ID":"15f819c3-8855-465a-8de2-4bcac9a10708","Type":"ContainerStarted","Data":"0cd9151bd3918e5b964ddd78844e0ec85b31a536c2a80d95da5e2b000492d858"}
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.156509 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cbgwz" event={"ID":"15f819c3-8855-465a-8de2-4bcac9a10708","Type":"ContainerStarted","Data":"66c23d00cdbb928837e6e188a1e42ce9325f3b39700025cd5234a99b19cb5371"}
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.185241 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-x89dk" event={"ID":"a6223410-b237-49b0-b6a5-50cb4169ba81","Type":"ContainerStarted","Data":"10085ecb835926ad34ae1b572fdaa0c559209bddb8db3d38fda99f11ee3058f3"}
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.185278 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-x89dk" event={"ID":"a6223410-b237-49b0-b6a5-50cb4169ba81","Type":"ContainerStarted","Data":"a1249c7599adc1490072b63121156723111ea81c21208e78b13ddbf4c0ac0c68"}
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.211678 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5jrjd" event={"ID":"13193ac4-0614-4822-860d-864860616013","Type":"ContainerStarted","Data":"b35e2e5dbca2f75dba1d21ff922f2ccdf86c5279932e3f0ca590817f1afb5677"}
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.211724 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5jrjd" event={"ID":"13193ac4-0614-4822-860d-864860616013","Type":"ContainerStarted","Data":"5b2383d27cc41a1278532af15143201825d3f099321af7c58dee990ddd9c3677"}
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.212618 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5jrjd"
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.214528 5120 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-5jrjd container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body=
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.214580 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5jrjd" podUID="13193ac4-0614-4822-860d-864860616013" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused"
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.224346 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 11 16:02:51 crc kubenswrapper[5120]: E1211 16:02:51.232683 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:51.732445354 +0000 UTC m=+120.986748685 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.250949 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8bt64" event={"ID":"84cc1699-abca-47ca-b641-438429faa1a8","Type":"ContainerStarted","Data":"15683ebc9f69f3b63534eb116807273bbd08e0f6a9e516b223d30485522c6c8f"}
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.250993 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8bt64" event={"ID":"84cc1699-abca-47ca-b641-438429faa1a8","Type":"ContainerStarted","Data":"854b69a042a7443e818d6d8c518977f9fc08e2766adf083fb7f0a712f9ba2448"}
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.264786 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-gtbd4" event={"ID":"135a93ff-5a38-46d6-821f-866964572cf5","Type":"ContainerStarted","Data":"1373a62a1ccc93d8bde2189f5e4c860373783dacc64ee0dbc10975c9f26a7849"}
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.265073 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-gtbd4" event={"ID":"135a93ff-5a38-46d6-821f-866964572cf5","Type":"ContainerStarted","Data":"76223dfac9d17a9cacd7b19fe9122c0c9610728c978cef3cb357ab85d51733c6"}
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.297137 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-d5wst" event={"ID":"cef0f13d-a538-4de5-be19-28719e4e8bfc","Type":"ContainerStarted","Data":"b08234839eb18d82b1cf8ee1776a3e82894e066552da97e387b06523a7d63051"}
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.328070 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb"
Dec 11 16:02:51 crc kubenswrapper[5120]: E1211 16:02:51.329219 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:51.829205265 +0000 UTC m=+121.083508596 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.337845 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-x4v9v" event={"ID":"cfe0ce73-8c24-4494-a66c-54fb1f143400","Type":"ContainerStarted","Data":"a4b1b43179815c90bf09af3dd8993f8855a7bf28bb7455b499b918676eabef82"}
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.395642 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-9qjgp" event={"ID":"1afcf892-0bad-42b2-9088-3a7c76be334f","Type":"ContainerStarted","Data":"79827670b868e016271d99e5f211a59dafecc374f561baa03eb5bf33920946e4"}
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.395682 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-9qjgp" event={"ID":"1afcf892-0bad-42b2-9088-3a7c76be334f","Type":"ContainerStarted","Data":"f4aceb66dac726feb78304a37d0777a9255447e8f64fdd1c32bb16747ccebda6"}
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.403104 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-8rdg7" event={"ID":"726e6606-5b45-4bba-865a-f581e8f6c218","Type":"ContainerStarted","Data":"3d5f3fb331db36d518785810ec4bfa89b8b2fa74adfcd12c9d9fc826d1c6134a"}
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.403160 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-8rdg7" event={"ID":"726e6606-5b45-4bba-865a-f581e8f6c218","Type":"ContainerStarted","Data":"5c174b91a4fbd2dc0161843133d7afaba3aa4adcba4ba8b700c99c932bb49709"}
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.403941 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-8rdg7"
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.407182 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-wvrn4" event={"ID":"d3ba2b9b-a777-4c95-bdd8-3feda00275ef","Type":"ContainerStarted","Data":"ee0f55150816e4637bb684fe5572f0fb9b83772950e85f1d4fe5ae5f121d656a"}
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.407215 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-wvrn4" event={"ID":"d3ba2b9b-a777-4c95-bdd8-3feda00275ef","Type":"ContainerStarted","Data":"c652cee3ca8c2ae47f8aa0ded059b4c4a6f41076ee8cf3ee20627a2a8786d097"}
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.407389 5120 patch_prober.go:28] interesting pod/downloads-747b44746d-8rdg7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body=
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.407445 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-8rdg7" podUID="726e6606-5b45-4bba-865a-f581e8f6c218" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused"
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.427555 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fpf4" event={"ID":"a2d9797c-79c8-41b8-943c-c30d41b5d2ba","Type":"ContainerStarted","Data":"787c19439736cd3471711b9a9d65ebe637f8a9fdbe37bc1c76e5d8170d6376db"}
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.427598 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fpf4" event={"ID":"a2d9797c-79c8-41b8-943c-c30d41b5d2ba","Type":"ContainerStarted","Data":"d0da248563fadf8cd53a5e545d50240923f7da42fba6ce01dcb696c63ac7d86e"}
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.428534 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fpf4"
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.430173 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-sgddk"
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.430667 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 11 16:02:51 crc kubenswrapper[5120]: E1211 16:02:51.432164 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:51.932134642 +0000 UTC m=+121.186437973 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.438294 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-pv79n"
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.448996 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-t6vqb"
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.471019 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29424480-4wgwc" podStartSLOduration=102.471002942 podStartE2EDuration="1m42.471002942s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:51.468740095 +0000 UTC m=+120.723043446" watchObservedRunningTime="2025-12-11 16:02:51.471002942 +0000 UTC m=+120.725306273"
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.533615 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb"
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.533634 5120 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-v9567 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 11 16:02:51 crc kubenswrapper[5120]: [-]has-synced failed: reason withheld
Dec 11 16:02:51 crc kubenswrapper[5120]: [+]process-running ok
Dec 11 16:02:51 crc kubenswrapper[5120]: healthz check failed
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.533688 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-v9567" podUID="16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 11 16:02:51 crc kubenswrapper[5120]: E1211 16:02:51.534301 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:52.034285709 +0000 UTC m=+121.288589110 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.577210 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-sgddk"
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.587453 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-vl5r9"
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.635889 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 11 16:02:51 crc kubenswrapper[5120]: E1211 16:02:51.637274 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:52.137259106 +0000 UTC m=+121.391562437 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.737720 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb"
Dec 11 16:02:51 crc kubenswrapper[5120]: E1211 16:02:51.738143 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:52.238128701 +0000 UTC m=+121.492432032 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.785318 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5jrjd" podStartSLOduration=102.785299601 podStartE2EDuration="1m42.785299601s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:51.755681584 +0000 UTC m=+121.009984925" watchObservedRunningTime="2025-12-11 16:02:51.785299601 +0000 UTC m=+121.039602932"
Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.840994 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 11 16:02:51 crc kubenswrapper[5120]: E1211 16:02:51.841344 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:52.341326974 +0000 UTC m=+121.595630305 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.843691 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-9qjgp" podStartSLOduration=102.843677134 podStartE2EDuration="1m42.843677134s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:51.787547458 +0000 UTC m=+121.041850799" watchObservedRunningTime="2025-12-11 16:02:51.843677134 +0000 UTC m=+121.097980455" Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.878020 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-x4v9v" podStartSLOduration=102.878003519 podStartE2EDuration="1m42.878003519s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:51.876901852 +0000 UTC m=+121.131205183" watchObservedRunningTime="2025-12-11 16:02:51.878003519 +0000 UTC m=+121.132306850" Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.909487 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fpf4" podStartSLOduration=102.909471933 podStartE2EDuration="1m42.909471933s" podCreationTimestamp="2025-12-11 16:01:09 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:51.905693138 +0000 UTC m=+121.159996489" watchObservedRunningTime="2025-12-11 16:02:51.909471933 +0000 UTC m=+121.163775264" Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.944501 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:51 crc kubenswrapper[5120]: E1211 16:02:51.944898 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:52.444849766 +0000 UTC m=+121.699153097 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.973880 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-x89dk" podStartSLOduration=102.973864898 podStartE2EDuration="1m42.973864898s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:51.944948248 +0000 UTC m=+121.199251579" watchObservedRunningTime="2025-12-11 16:02:51.973864898 +0000 UTC m=+121.228168229" Dec 11 16:02:51 crc kubenswrapper[5120]: I1211 16:02:51.974782 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-8rdg7" podStartSLOduration=102.974775341 podStartE2EDuration="1m42.974775341s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:51.97235512 +0000 UTC m=+121.226658451" watchObservedRunningTime="2025-12-11 16:02:51.974775341 +0000 UTC m=+121.229078672" Dec 11 16:02:52 crc kubenswrapper[5120]: E1211 16:02:52.045289 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-11 16:02:52.545272109 +0000 UTC m=+121.799575440 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.045317 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.045518 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:52 crc kubenswrapper[5120]: E1211 16:02:52.045871 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:52.545863904 +0000 UTC m=+121.800167235 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.076217 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-sgddk" podStartSLOduration=10.076197229 podStartE2EDuration="10.076197229s" podCreationTimestamp="2025-12-11 16:02:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:52.071346117 +0000 UTC m=+121.325649448" watchObservedRunningTime="2025-12-11 16:02:52.076197229 +0000 UTC m=+121.330500560" Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.137466 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-wvrn4" podStartSLOduration=103.137446994 podStartE2EDuration="1m43.137446994s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:52.137192948 +0000 UTC m=+121.391496279" watchObservedRunningTime="2025-12-11 16:02:52.137446994 +0000 UTC m=+121.391750325" Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.146768 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:52 crc kubenswrapper[5120]: E1211 16:02:52.147015 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:52.646985615 +0000 UTC m=+121.901288946 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.147436 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:52 crc kubenswrapper[5120]: E1211 16:02:52.147886 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:52.647876637 +0000 UTC m=+121.902179968 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.218011 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-96ggd" podStartSLOduration=103.217994036 podStartE2EDuration="1m43.217994036s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:52.214775585 +0000 UTC m=+121.469078916" watchObservedRunningTime="2025-12-11 16:02:52.217994036 +0000 UTC m=+121.472297367" Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.248326 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:52 crc kubenswrapper[5120]: E1211 16:02:52.248588 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:52.748560297 +0000 UTC m=+122.002863628 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.248838 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:52 crc kubenswrapper[5120]: E1211 16:02:52.249222 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:52.749212984 +0000 UTC m=+122.003516315 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.306062 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cbgwz" podStartSLOduration=103.306040577 podStartE2EDuration="1m43.306040577s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:52.2656909 +0000 UTC m=+121.519994231" watchObservedRunningTime="2025-12-11 16:02:52.306040577 +0000 UTC m=+121.560343908" Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.308307 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-gtbd4" podStartSLOduration=10.308297514 podStartE2EDuration="10.308297514s" podCreationTimestamp="2025-12-11 16:02:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:52.295644185 +0000 UTC m=+121.549947516" watchObservedRunningTime="2025-12-11 16:02:52.308297514 +0000 UTC m=+121.562600845" Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.349652 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:52 crc kubenswrapper[5120]: E1211 16:02:52.350006 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:52.849989946 +0000 UTC m=+122.104293267 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.428374 5120 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-6fpf4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.428455 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fpf4" podUID="a2d9797c-79c8-41b8-943c-c30d41b5d2ba" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.448734 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-d5wst" 
event={"ID":"cef0f13d-a538-4de5-be19-28719e4e8bfc","Type":"ContainerStarted","Data":"e8b693328d4131a1fb96c0c5dc602e83af2b814b41666d65a0ccd5a79f90d902"} Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.448800 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-d5wst" event={"ID":"cef0f13d-a538-4de5-be19-28719e4e8bfc","Type":"ContainerStarted","Data":"cc61ef34f44e08752e36fd398d84a01ff193953573ff3351d6184c136ae3eec8"} Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.451255 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:52 crc kubenswrapper[5120]: E1211 16:02:52.451612 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:52.951595569 +0000 UTC m=+122.205898900 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.458685 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-9qjgp" event={"ID":"1afcf892-0bad-42b2-9088-3a7c76be334f","Type":"ContainerStarted","Data":"2080d9e13e1c70c4fd10432f9e0f5516a9b43e8b54252932df923838b6781eb3"} Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.470184 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-f5tq6" event={"ID":"5d7a2793-71cb-48e4-9f55-527f7b4a7903","Type":"ContainerStarted","Data":"2e123ecfea7d321acbd116086a9c15d5f9fd0debc2f35c80e43d36ee9e2084a9"} Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.477568 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-d5wst" podStartSLOduration=103.477553134 podStartE2EDuration="1m43.477553134s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:52.476355194 +0000 UTC m=+121.730658525" watchObservedRunningTime="2025-12-11 16:02:52.477553134 +0000 UTC m=+121.731856465" Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.478306 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-xfffw" 
event={"ID":"040e966f-334a-4b63-a329-01b73c6817f2","Type":"ContainerStarted","Data":"cd49abc3fa3bbbc9f5e9403c084da917fa8d0542b1c4c965066b35ff71b56ac6"} Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.478566 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-8bt64" podStartSLOduration=102.47856171 podStartE2EDuration="1m42.47856171s" podCreationTimestamp="2025-12-11 16:01:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:52.35886366 +0000 UTC m=+121.613166991" watchObservedRunningTime="2025-12-11 16:02:52.47856171 +0000 UTC m=+121.732865041" Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.483109 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-chrjf" event={"ID":"84f8d245-d500-4435-89b2-4926bedad82c","Type":"ContainerStarted","Data":"aa13ff240425372f9c9450085eb29258015d906b60e45046d7235fc24735c383"} Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.488016 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f4gw6" event={"ID":"c5af33d5-343a-4149-b690-44b4a97ff385","Type":"ContainerStarted","Data":"d8df7750c1c80070af92533a5d8fb767fd8deaf1564d3d02164dacdc243a1a04"} Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.492863 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5qdss" event={"ID":"920040aa-0665-4aa8-8f93-dd24feadeef2","Type":"ContainerStarted","Data":"78d6978baa12d9f7522d99888850a834d57aee9edf43c773b20660d8dc22f60f"} Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.492900 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5qdss" 
event={"ID":"920040aa-0665-4aa8-8f93-dd24feadeef2","Type":"ContainerStarted","Data":"09679eba9fca4bd9f80773c814b7f4c045c85dde3e1b72b25b132c9eda82e2b3"} Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.493490 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-5qdss" Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.501090 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8mr9f" event={"ID":"02544bf6-305f-4419-9bd1-fa1662e6b0bb","Type":"ContainerStarted","Data":"22b3a9c9ee751c3904ab23ead1af4e149c9bcb860b7c8d45c6b9ea0bc0dd2f67"} Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.501818 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8mr9f" Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.504978 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-f5tq6" podStartSLOduration=103.504962316 podStartE2EDuration="1m43.504962316s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:52.503508109 +0000 UTC m=+121.757811440" watchObservedRunningTime="2025-12-11 16:02:52.504962316 +0000 UTC m=+121.759265647" Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.522648 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-pshw7" event={"ID":"0b99d695-4aaa-49ee-89f2-4597772b73ed","Type":"ContainerStarted","Data":"a7fa73d29a4504b6c573737ac218ba254aeb5ac3f3e6351793e2a1bda571193b"} Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.525477 5120 patch_prober.go:28] interesting 
pod/marketplace-operator-547dbd544d-q29gs container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.525525 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-q29gs" podUID="09d4a454-c53e-446e-9c58-ace5cef3d494" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.526024 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cbgwz" Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.526347 5120 patch_prober.go:28] interesting pod/downloads-747b44746d-8rdg7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.526393 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-8rdg7" podUID="726e6606-5b45-4bba-865a-f581e8f6c218" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.528866 5120 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-v9567 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 16:02:52 crc kubenswrapper[5120]: [-]has-synced failed: reason withheld Dec 11 16:02:52 crc 
kubenswrapper[5120]: [+]process-running ok Dec 11 16:02:52 crc kubenswrapper[5120]: healthz check failed Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.528936 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-v9567" podUID="16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.533678 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8mr9f" podStartSLOduration=103.533656799 podStartE2EDuration="1m43.533656799s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:52.530404037 +0000 UTC m=+121.784707368" watchObservedRunningTime="2025-12-11 16:02:52.533656799 +0000 UTC m=+121.787960130" Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.534961 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cbgwz" Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.543853 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-5jrjd" Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.552566 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:52 crc kubenswrapper[5120]: E1211 16:02:52.552755 5120 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:53.052727641 +0000 UTC m=+122.307030962 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.553860 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:52 crc kubenswrapper[5120]: E1211 16:02:52.568236 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:53.068221091 +0000 UTC m=+122.322524422 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.599047 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-f4gw6" podStartSLOduration=103.599030989 podStartE2EDuration="1m43.599030989s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:52.570018897 +0000 UTC m=+121.824322228" watchObservedRunningTime="2025-12-11 16:02:52.599030989 +0000 UTC m=+121.853334320" Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.599425 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-chrjf" podStartSLOduration=103.599420708 podStartE2EDuration="1m43.599420708s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:52.597626813 +0000 UTC m=+121.851930144" watchObservedRunningTime="2025-12-11 16:02:52.599420708 +0000 UTC m=+121.853724049" Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.612557 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46444: no serving certificate available for the kubelet" Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.658177 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:52 crc kubenswrapper[5120]: E1211 16:02:52.658639 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:53.158617342 +0000 UTC m=+122.412920673 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.679992 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-5qdss" podStartSLOduration=9.67996876 podStartE2EDuration="9.67996876s" podCreationTimestamp="2025-12-11 16:02:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:52.671904447 +0000 UTC m=+121.926207778" watchObservedRunningTime="2025-12-11 16:02:52.67996876 +0000 UTC m=+121.934272091" Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.685041 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-xfffw" podStartSLOduration=103.685012668 podStartE2EDuration="1m43.685012668s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:52.638866033 +0000 UTC m=+121.893169364" watchObservedRunningTime="2025-12-11 16:02:52.685012668 +0000 UTC m=+121.939315989" Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.761087 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:52 crc kubenswrapper[5120]: E1211 16:02:52.761502 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:53.261478807 +0000 UTC m=+122.515782138 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.837592 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-6fpf4" Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.861833 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:52 crc kubenswrapper[5120]: E1211 16:02:52.862510 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:53.362494735 +0000 UTC m=+122.616798066 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.930361 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-sgddk"] Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.963542 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:52 crc kubenswrapper[5120]: E1211 16:02:52.963952 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:53.463936844 +0000 UTC m=+122.718240175 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:52 crc kubenswrapper[5120]: I1211 16:02:52.975946 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c2744"] Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.015428 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c2744"] Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.015647 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c2744" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.018846 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.065100 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.065341 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpjxp\" (UniqueName: \"kubernetes.io/projected/4b9fbe5e-2046-431a-af21-9bfbbbecf32b-kube-api-access-tpjxp\") pod \"certified-operators-c2744\" (UID: \"4b9fbe5e-2046-431a-af21-9bfbbbecf32b\") " 
pod="openshift-marketplace/certified-operators-c2744" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.065375 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b9fbe5e-2046-431a-af21-9bfbbbecf32b-catalog-content\") pod \"certified-operators-c2744\" (UID: \"4b9fbe5e-2046-431a-af21-9bfbbbecf32b\") " pod="openshift-marketplace/certified-operators-c2744" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.065405 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b9fbe5e-2046-431a-af21-9bfbbbecf32b-utilities\") pod \"certified-operators-c2744\" (UID: \"4b9fbe5e-2046-431a-af21-9bfbbbecf32b\") " pod="openshift-marketplace/certified-operators-c2744" Dec 11 16:02:53 crc kubenswrapper[5120]: E1211 16:02:53.065624 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:53.565608219 +0000 UTC m=+122.819911550 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.166475 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tpjxp\" (UniqueName: \"kubernetes.io/projected/4b9fbe5e-2046-431a-af21-9bfbbbecf32b-kube-api-access-tpjxp\") pod \"certified-operators-c2744\" (UID: \"4b9fbe5e-2046-431a-af21-9bfbbbecf32b\") " pod="openshift-marketplace/certified-operators-c2744" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.166524 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b9fbe5e-2046-431a-af21-9bfbbbecf32b-catalog-content\") pod \"certified-operators-c2744\" (UID: \"4b9fbe5e-2046-431a-af21-9bfbbbecf32b\") " pod="openshift-marketplace/certified-operators-c2744" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.166556 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b9fbe5e-2046-431a-af21-9bfbbbecf32b-utilities\") pod \"certified-operators-c2744\" (UID: \"4b9fbe5e-2046-431a-af21-9bfbbbecf32b\") " pod="openshift-marketplace/certified-operators-c2744" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.166609 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: 
\"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:53 crc kubenswrapper[5120]: E1211 16:02:53.166907 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:53.666895234 +0000 UTC m=+122.921198565 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.167868 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b9fbe5e-2046-431a-af21-9bfbbbecf32b-catalog-content\") pod \"certified-operators-c2744\" (UID: \"4b9fbe5e-2046-431a-af21-9bfbbbecf32b\") " pod="openshift-marketplace/certified-operators-c2744" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.168063 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b9fbe5e-2046-431a-af21-9bfbbbecf32b-utilities\") pod \"certified-operators-c2744\" (UID: \"4b9fbe5e-2046-431a-af21-9bfbbbecf32b\") " pod="openshift-marketplace/certified-operators-c2744" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.169333 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rj8n4"] Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.174310 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rj8n4" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.179846 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rj8n4"] Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.180443 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.210243 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpjxp\" (UniqueName: \"kubernetes.io/projected/4b9fbe5e-2046-431a-af21-9bfbbbecf32b-kube-api-access-tpjxp\") pod \"certified-operators-c2744\" (UID: \"4b9fbe5e-2046-431a-af21-9bfbbbecf32b\") " pod="openshift-marketplace/certified-operators-c2744" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.267964 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:53 crc kubenswrapper[5120]: E1211 16:02:53.268014 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:53.767988864 +0000 UTC m=+123.022292205 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.268265 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ca05d96-1ede-4860-abf0-dda71706ae45-utilities\") pod \"community-operators-rj8n4\" (UID: \"6ca05d96-1ede-4860-abf0-dda71706ae45\") " pod="openshift-marketplace/community-operators-rj8n4" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.268337 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.268387 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ca05d96-1ede-4860-abf0-dda71706ae45-catalog-content\") pod \"community-operators-rj8n4\" (UID: \"6ca05d96-1ede-4860-abf0-dda71706ae45\") " pod="openshift-marketplace/community-operators-rj8n4" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.268525 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2lw5\" (UniqueName: 
\"kubernetes.io/projected/6ca05d96-1ede-4860-abf0-dda71706ae45-kube-api-access-x2lw5\") pod \"community-operators-rj8n4\" (UID: \"6ca05d96-1ede-4860-abf0-dda71706ae45\") " pod="openshift-marketplace/community-operators-rj8n4" Dec 11 16:02:53 crc kubenswrapper[5120]: E1211 16:02:53.268768 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:53.768750153 +0000 UTC m=+123.023053494 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.329441 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c2744" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.369794 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.369972 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ca05d96-1ede-4860-abf0-dda71706ae45-catalog-content\") pod \"community-operators-rj8n4\" (UID: \"6ca05d96-1ede-4860-abf0-dda71706ae45\") " pod="openshift-marketplace/community-operators-rj8n4" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.370004 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x2lw5\" (UniqueName: \"kubernetes.io/projected/6ca05d96-1ede-4860-abf0-dda71706ae45-kube-api-access-x2lw5\") pod \"community-operators-rj8n4\" (UID: \"6ca05d96-1ede-4860-abf0-dda71706ae45\") " pod="openshift-marketplace/community-operators-rj8n4" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.370143 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ca05d96-1ede-4860-abf0-dda71706ae45-utilities\") pod \"community-operators-rj8n4\" (UID: \"6ca05d96-1ede-4860-abf0-dda71706ae45\") " pod="openshift-marketplace/community-operators-rj8n4" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.370657 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ca05d96-1ede-4860-abf0-dda71706ae45-utilities\") pod \"community-operators-rj8n4\" (UID: \"6ca05d96-1ede-4860-abf0-dda71706ae45\") " 
pod="openshift-marketplace/community-operators-rj8n4" Dec 11 16:02:53 crc kubenswrapper[5120]: E1211 16:02:53.370734 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:53.870716936 +0000 UTC m=+123.125020267 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.370974 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ca05d96-1ede-4860-abf0-dda71706ae45-catalog-content\") pod \"community-operators-rj8n4\" (UID: \"6ca05d96-1ede-4860-abf0-dda71706ae45\") " pod="openshift-marketplace/community-operators-rj8n4" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.375132 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k644q"] Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.413485 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k644q"] Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.413688 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k644q" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.415034 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2lw5\" (UniqueName: \"kubernetes.io/projected/6ca05d96-1ede-4860-abf0-dda71706ae45-kube-api-access-x2lw5\") pod \"community-operators-rj8n4\" (UID: \"6ca05d96-1ede-4860-abf0-dda71706ae45\") " pod="openshift-marketplace/community-operators-rj8n4" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.471728 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwjql\" (UniqueName: \"kubernetes.io/projected/24c0e236-bb3f-4b08-ba51-b0881c127d94-kube-api-access-vwjql\") pod \"certified-operators-k644q\" (UID: \"24c0e236-bb3f-4b08-ba51-b0881c127d94\") " pod="openshift-marketplace/certified-operators-k644q" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.471787 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24c0e236-bb3f-4b08-ba51-b0881c127d94-catalog-content\") pod \"certified-operators-k644q\" (UID: \"24c0e236-bb3f-4b08-ba51-b0881c127d94\") " pod="openshift-marketplace/certified-operators-k644q" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.471817 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.471839 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/24c0e236-bb3f-4b08-ba51-b0881c127d94-utilities\") pod \"certified-operators-k644q\" (UID: \"24c0e236-bb3f-4b08-ba51-b0881c127d94\") " pod="openshift-marketplace/certified-operators-k644q" Dec 11 16:02:53 crc kubenswrapper[5120]: E1211 16:02:53.472252 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:53.972224316 +0000 UTC m=+123.226542918 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.489434 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rj8n4" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.530356 5120 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-v9567 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 16:02:53 crc kubenswrapper[5120]: [-]has-synced failed: reason withheld Dec 11 16:02:53 crc kubenswrapper[5120]: [+]process-running ok Dec 11 16:02:53 crc kubenswrapper[5120]: healthz check failed Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.530807 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-v9567" podUID="16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.549373 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-z42wj" event={"ID":"5d5549cf-9120-4619-8794-574e335d251b","Type":"ContainerStarted","Data":"23ab8faef763aaa02fce1e3abfc917b22a628f3efd61537c34f221857443042b"} Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.556946 5120 patch_prober.go:28] interesting pod/downloads-747b44746d-8rdg7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.557018 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-8rdg7" podUID="726e6606-5b45-4bba-865a-f581e8f6c218" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.574404 5120 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.578587 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-q29gs" Dec 11 16:02:53 crc kubenswrapper[5120]: E1211 16:02:53.580268 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:54.080236601 +0000 UTC m=+123.334539932 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.585382 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vwjql\" (UniqueName: \"kubernetes.io/projected/24c0e236-bb3f-4b08-ba51-b0881c127d94-kube-api-access-vwjql\") pod \"certified-operators-k644q\" (UID: \"24c0e236-bb3f-4b08-ba51-b0881c127d94\") " pod="openshift-marketplace/certified-operators-k644q" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.590756 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/24c0e236-bb3f-4b08-ba51-b0881c127d94-catalog-content\") pod \"certified-operators-k644q\" (UID: \"24c0e236-bb3f-4b08-ba51-b0881c127d94\") " pod="openshift-marketplace/certified-operators-k644q" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.591201 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24c0e236-bb3f-4b08-ba51-b0881c127d94-catalog-content\") pod \"certified-operators-k644q\" (UID: \"24c0e236-bb3f-4b08-ba51-b0881c127d94\") " pod="openshift-marketplace/certified-operators-k644q" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.591358 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:53 crc kubenswrapper[5120]: E1211 16:02:53.591610 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:54.091596808 +0000 UTC m=+123.345900139 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.592602 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24c0e236-bb3f-4b08-ba51-b0881c127d94-utilities\") pod \"certified-operators-k644q\" (UID: \"24c0e236-bb3f-4b08-ba51-b0881c127d94\") " pod="openshift-marketplace/certified-operators-k644q" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.594245 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24c0e236-bb3f-4b08-ba51-b0881c127d94-utilities\") pod \"certified-operators-k644q\" (UID: \"24c0e236-bb3f-4b08-ba51-b0881c127d94\") " pod="openshift-marketplace/certified-operators-k644q" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.623092 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r58jk"] Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.639390 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r58jk" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.643349 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r58jk"] Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.654880 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwjql\" (UniqueName: \"kubernetes.io/projected/24c0e236-bb3f-4b08-ba51-b0881c127d94-kube-api-access-vwjql\") pod \"certified-operators-k644q\" (UID: \"24c0e236-bb3f-4b08-ba51-b0881c127d94\") " pod="openshift-marketplace/certified-operators-k644q" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.686251 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.696000 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:53 crc kubenswrapper[5120]: E1211 16:02:53.696212 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:54.196191755 +0000 UTC m=+123.450495086 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.696589 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.696657 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4f9c834-beb5-42f0-895d-eca73b7897e0-catalog-content\") pod \"community-operators-r58jk\" (UID: \"d4f9c834-beb5-42f0-895d-eca73b7897e0\") " pod="openshift-marketplace/community-operators-r58jk" Dec 11 16:02:53 crc kubenswrapper[5120]: E1211 16:02:53.696968 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:54.196960725 +0000 UTC m=+123.451264056 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.696955 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4f9c834-beb5-42f0-895d-eca73b7897e0-utilities\") pod \"community-operators-r58jk\" (UID: \"d4f9c834-beb5-42f0-895d-eca73b7897e0\") " pod="openshift-marketplace/community-operators-r58jk" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.697118 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2zqs\" (UniqueName: \"kubernetes.io/projected/d4f9c834-beb5-42f0-895d-eca73b7897e0-kube-api-access-m2zqs\") pod \"community-operators-r58jk\" (UID: \"d4f9c834-beb5-42f0-895d-eca73b7897e0\") " pod="openshift-marketplace/community-operators-r58jk" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.702534 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.702733 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.720081 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.720327 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.770247 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k644q" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.799595 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.799830 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/565f3763-d271-4a37-93a4-e17d54bfe62c-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"565f3763-d271-4a37-93a4-e17d54bfe62c\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.799921 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/565f3763-d271-4a37-93a4-e17d54bfe62c-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"565f3763-d271-4a37-93a4-e17d54bfe62c\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.799965 5120 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4f9c834-beb5-42f0-895d-eca73b7897e0-catalog-content\") pod \"community-operators-r58jk\" (UID: \"d4f9c834-beb5-42f0-895d-eca73b7897e0\") " pod="openshift-marketplace/community-operators-r58jk" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.799980 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4f9c834-beb5-42f0-895d-eca73b7897e0-utilities\") pod \"community-operators-r58jk\" (UID: \"d4f9c834-beb5-42f0-895d-eca73b7897e0\") " pod="openshift-marketplace/community-operators-r58jk" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.800020 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m2zqs\" (UniqueName: \"kubernetes.io/projected/d4f9c834-beb5-42f0-895d-eca73b7897e0-kube-api-access-m2zqs\") pod \"community-operators-r58jk\" (UID: \"d4f9c834-beb5-42f0-895d-eca73b7897e0\") " pod="openshift-marketplace/community-operators-r58jk" Dec 11 16:02:53 crc kubenswrapper[5120]: E1211 16:02:53.800390 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:54.300374113 +0000 UTC m=+123.554677444 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.800743 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4f9c834-beb5-42f0-895d-eca73b7897e0-catalog-content\") pod \"community-operators-r58jk\" (UID: \"d4f9c834-beb5-42f0-895d-eca73b7897e0\") " pod="openshift-marketplace/community-operators-r58jk" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.800951 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4f9c834-beb5-42f0-895d-eca73b7897e0-utilities\") pod \"community-operators-r58jk\" (UID: \"d4f9c834-beb5-42f0-895d-eca73b7897e0\") " pod="openshift-marketplace/community-operators-r58jk" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.843706 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c2744"] Dec 11 16:02:53 crc kubenswrapper[5120]: W1211 16:02:53.866704 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b9fbe5e_2046_431a_af21_9bfbbbecf32b.slice/crio-231e600ae4413dfe25753951051f36664e0c8d38ab97b6911b120c53e44d5c01 WatchSource:0}: Error finding container 231e600ae4413dfe25753951051f36664e0c8d38ab97b6911b120c53e44d5c01: Status 404 returned error can't find the container with id 231e600ae4413dfe25753951051f36664e0c8d38ab97b6911b120c53e44d5c01 Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.901283 5120 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/565f3763-d271-4a37-93a4-e17d54bfe62c-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"565f3763-d271-4a37-93a4-e17d54bfe62c\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.901326 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.901384 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/565f3763-d271-4a37-93a4-e17d54bfe62c-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"565f3763-d271-4a37-93a4-e17d54bfe62c\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.901403 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/565f3763-d271-4a37-93a4-e17d54bfe62c-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"565f3763-d271-4a37-93a4-e17d54bfe62c\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 11 16:02:53 crc kubenswrapper[5120]: E1211 16:02:53.901703 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:54.401690219 +0000 UTC m=+123.655993550 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.903060 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2zqs\" (UniqueName: \"kubernetes.io/projected/d4f9c834-beb5-42f0-895d-eca73b7897e0-kube-api-access-m2zqs\") pod \"community-operators-r58jk\" (UID: \"d4f9c834-beb5-42f0-895d-eca73b7897e0\") " pod="openshift-marketplace/community-operators-r58jk" Dec 11 16:02:53 crc kubenswrapper[5120]: I1211 16:02:53.943750 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/565f3763-d271-4a37-93a4-e17d54bfe62c-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"565f3763-d271-4a37-93a4-e17d54bfe62c\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 11 16:02:54 crc kubenswrapper[5120]: I1211 16:02:54.002671 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:54 crc kubenswrapper[5120]: E1211 16:02:54.002830 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-11 16:02:54.5028054 +0000 UTC m=+123.757108731 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:54 crc kubenswrapper[5120]: I1211 16:02:54.003062 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:54 crc kubenswrapper[5120]: E1211 16:02:54.003408 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:54.503400775 +0000 UTC m=+123.757704106 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:54 crc kubenswrapper[5120]: I1211 16:02:54.006320 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rj8n4"] Dec 11 16:02:54 crc kubenswrapper[5120]: I1211 16:02:54.022061 5120 scope.go:117] "RemoveContainer" containerID="7a41fbe2b0881e86b16c4ddd845a97a6f0fe9b72c6b542e1e379a369c26766ad" Dec 11 16:02:54 crc kubenswrapper[5120]: I1211 16:02:54.042230 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r58jk" Dec 11 16:02:54 crc kubenswrapper[5120]: I1211 16:02:54.058273 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 11 16:02:54 crc kubenswrapper[5120]: I1211 16:02:54.106258 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:54 crc kubenswrapper[5120]: E1211 16:02:54.107132 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:54.607094841 +0000 UTC m=+123.861398182 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:54 crc kubenswrapper[5120]: I1211 16:02:54.209081 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:54 crc kubenswrapper[5120]: E1211 16:02:54.209512 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:54.709497734 +0000 UTC m=+123.963801065 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:54 crc kubenswrapper[5120]: I1211 16:02:54.284775 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k644q"] Dec 11 16:02:54 crc kubenswrapper[5120]: I1211 16:02:54.320089 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:54 crc kubenswrapper[5120]: E1211 16:02:54.320897 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:54.820873744 +0000 UTC m=+124.075177075 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:54 crc kubenswrapper[5120]: I1211 16:02:54.422039 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:54 crc kubenswrapper[5120]: E1211 16:02:54.422472 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:54.922455607 +0000 UTC m=+124.176758938 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:54 crc kubenswrapper[5120]: I1211 16:02:54.516996 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 11 16:02:54 crc kubenswrapper[5120]: I1211 16:02:54.523694 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:54 crc kubenswrapper[5120]: E1211 16:02:54.524065 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:55.024050139 +0000 UTC m=+124.278353470 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:54 crc kubenswrapper[5120]: I1211 16:02:54.528322 5120 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-v9567 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 16:02:54 crc kubenswrapper[5120]: [-]has-synced failed: reason withheld Dec 11 16:02:54 crc kubenswrapper[5120]: [+]process-running ok Dec 11 16:02:54 crc kubenswrapper[5120]: healthz check failed Dec 11 16:02:54 crc kubenswrapper[5120]: I1211 16:02:54.528376 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-v9567" podUID="16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 16:02:54 crc kubenswrapper[5120]: I1211 16:02:54.553823 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj8n4" event={"ID":"6ca05d96-1ede-4860-abf0-dda71706ae45","Type":"ContainerStarted","Data":"4c98e0434faeb17e75d4b037a2b07b7d146423f04cf3eefc1b84fe3bb7de8614"} Dec 11 16:02:54 crc kubenswrapper[5120]: I1211 16:02:54.555593 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c2744" event={"ID":"4b9fbe5e-2046-431a-af21-9bfbbbecf32b","Type":"ContainerStarted","Data":"231e600ae4413dfe25753951051f36664e0c8d38ab97b6911b120c53e44d5c01"} Dec 11 16:02:54 crc kubenswrapper[5120]: I1211 16:02:54.556865 5120 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"565f3763-d271-4a37-93a4-e17d54bfe62c","Type":"ContainerStarted","Data":"30fa9dfe49ebd4f3a9fe8b66d9a850f7fbab18accd6c3a170991e204701e4566"} Dec 11 16:02:54 crc kubenswrapper[5120]: I1211 16:02:54.559277 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k644q" event={"ID":"24c0e236-bb3f-4b08-ba51-b0881c127d94","Type":"ContainerStarted","Data":"508c04b7bace26227fbe03026b7dd7d6625550c9d336df5daaef210f5962c11d"} Dec 11 16:02:54 crc kubenswrapper[5120]: I1211 16:02:54.605418 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r58jk"] Dec 11 16:02:54 crc kubenswrapper[5120]: W1211 16:02:54.619573 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4f9c834_beb5_42f0_895d_eca73b7897e0.slice/crio-8ed0da74d8c1af1cf5df68fd25e80676a77f7d9a3a5fc489b6836948571e5caf WatchSource:0}: Error finding container 8ed0da74d8c1af1cf5df68fd25e80676a77f7d9a3a5fc489b6836948571e5caf: Status 404 returned error can't find the container with id 8ed0da74d8c1af1cf5df68fd25e80676a77f7d9a3a5fc489b6836948571e5caf Dec 11 16:02:54 crc kubenswrapper[5120]: I1211 16:02:54.625381 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:54 crc kubenswrapper[5120]: E1211 16:02:54.625703 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-11 16:02:55.125685783 +0000 UTC m=+124.379989114 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:54 crc kubenswrapper[5120]: I1211 16:02:54.726118 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:54 crc kubenswrapper[5120]: E1211 16:02:54.726294 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:55.226264611 +0000 UTC m=+124.480567942 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:54 crc kubenswrapper[5120]: I1211 16:02:54.726948 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:54 crc kubenswrapper[5120]: E1211 16:02:54.727346 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:55.227331277 +0000 UTC m=+124.481634608 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:54 crc kubenswrapper[5120]: I1211 16:02:54.828379 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:54 crc kubenswrapper[5120]: E1211 16:02:54.828569 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:55.328540651 +0000 UTC m=+124.582843982 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:54 crc kubenswrapper[5120]: I1211 16:02:54.884977 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-sgddk" podUID="ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://b9694e04edc1482d846aea56f4a8549e6e163687fb078d54c939afbe48100e19" gracePeriod=30 Dec 11 16:02:54 crc kubenswrapper[5120]: I1211 16:02:54.932324 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:54 crc kubenswrapper[5120]: E1211 16:02:54.932661 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:55.432645977 +0000 UTC m=+124.686949308 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:54 crc kubenswrapper[5120]: I1211 16:02:54.970779 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l2fzs"] Dec 11 16:02:54 crc kubenswrapper[5120]: I1211 16:02:54.977138 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l2fzs" Dec 11 16:02:54 crc kubenswrapper[5120]: I1211 16:02:54.980711 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 11 16:02:54 crc kubenswrapper[5120]: I1211 16:02:54.980932 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l2fzs"] Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.039496 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.039750 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02-catalog-content\") pod \"redhat-marketplace-l2fzs\" (UID: \"ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02\") " 
pod="openshift-marketplace/redhat-marketplace-l2fzs" Dec 11 16:02:55 crc kubenswrapper[5120]: E1211 16:02:55.039856 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:55.539833691 +0000 UTC m=+124.794137012 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.039975 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf8cz\" (UniqueName: \"kubernetes.io/projected/ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02-kube-api-access-kf8cz\") pod \"redhat-marketplace-l2fzs\" (UID: \"ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02\") " pod="openshift-marketplace/redhat-marketplace-l2fzs" Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.040039 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02-utilities\") pod \"redhat-marketplace-l2fzs\" (UID: \"ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02\") " pod="openshift-marketplace/redhat-marketplace-l2fzs" Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.141255 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kf8cz\" (UniqueName: \"kubernetes.io/projected/ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02-kube-api-access-kf8cz\") 
pod \"redhat-marketplace-l2fzs\" (UID: \"ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02\") " pod="openshift-marketplace/redhat-marketplace-l2fzs" Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.141298 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02-utilities\") pod \"redhat-marketplace-l2fzs\" (UID: \"ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02\") " pod="openshift-marketplace/redhat-marketplace-l2fzs" Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.141681 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02-utilities\") pod \"redhat-marketplace-l2fzs\" (UID: \"ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02\") " pod="openshift-marketplace/redhat-marketplace-l2fzs" Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.145253 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.145331 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02-catalog-content\") pod \"redhat-marketplace-l2fzs\" (UID: \"ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02\") " pod="openshift-marketplace/redhat-marketplace-l2fzs" Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.145672 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02-catalog-content\") pod 
\"redhat-marketplace-l2fzs\" (UID: \"ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02\") " pod="openshift-marketplace/redhat-marketplace-l2fzs" Dec 11 16:02:55 crc kubenswrapper[5120]: E1211 16:02:55.145913 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:55.645900177 +0000 UTC m=+124.900203508 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.173461 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf8cz\" (UniqueName: \"kubernetes.io/projected/ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02-kube-api-access-kf8cz\") pod \"redhat-marketplace-l2fzs\" (UID: \"ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02\") " pod="openshift-marketplace/redhat-marketplace-l2fzs" Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.246918 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:55 crc kubenswrapper[5120]: E1211 16:02:55.247109 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:55.747082229 +0000 UTC m=+125.001385550 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.247401 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:55 crc kubenswrapper[5120]: E1211 16:02:55.247679 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:55.747665754 +0000 UTC m=+125.001969085 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.347357 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l2fzs" Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.348250 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:55 crc kubenswrapper[5120]: E1211 16:02:55.348711 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:55.848685862 +0000 UTC m=+125.102989193 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.351222 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:55 crc kubenswrapper[5120]: E1211 16:02:55.351911 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:55.851887563 +0000 UTC m=+125.106190894 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.374555 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mpccf"] Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.382721 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mpccf" Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.384458 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mpccf"] Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.452649 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.452855 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bc42a63-9035-4803-98fb-ce63eef24511-utilities\") pod \"redhat-marketplace-mpccf\" (UID: \"1bc42a63-9035-4803-98fb-ce63eef24511\") " pod="openshift-marketplace/redhat-marketplace-mpccf" Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.452885 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1bc42a63-9035-4803-98fb-ce63eef24511-catalog-content\") pod \"redhat-marketplace-mpccf\" (UID: \"1bc42a63-9035-4803-98fb-ce63eef24511\") " pod="openshift-marketplace/redhat-marketplace-mpccf" Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.452924 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxhrr\" (UniqueName: \"kubernetes.io/projected/1bc42a63-9035-4803-98fb-ce63eef24511-kube-api-access-vxhrr\") pod \"redhat-marketplace-mpccf\" (UID: \"1bc42a63-9035-4803-98fb-ce63eef24511\") " pod="openshift-marketplace/redhat-marketplace-mpccf" Dec 11 16:02:55 crc kubenswrapper[5120]: E1211 16:02:55.453036 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:55.953020284 +0000 UTC m=+125.207323615 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.528359 5120 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-v9567 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 16:02:55 crc kubenswrapper[5120]: [-]has-synced failed: reason withheld Dec 11 16:02:55 crc kubenswrapper[5120]: [+]process-running ok Dec 11 16:02:55 crc kubenswrapper[5120]: healthz check failed Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.528420 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-v9567" podUID="16c977f7-e1fc-4a1c-86eb-0dbd0f3d4240" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.554793 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.554893 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bc42a63-9035-4803-98fb-ce63eef24511-utilities\") pod 
\"redhat-marketplace-mpccf\" (UID: \"1bc42a63-9035-4803-98fb-ce63eef24511\") " pod="openshift-marketplace/redhat-marketplace-mpccf" Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.554916 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bc42a63-9035-4803-98fb-ce63eef24511-catalog-content\") pod \"redhat-marketplace-mpccf\" (UID: \"1bc42a63-9035-4803-98fb-ce63eef24511\") " pod="openshift-marketplace/redhat-marketplace-mpccf" Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.554955 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vxhrr\" (UniqueName: \"kubernetes.io/projected/1bc42a63-9035-4803-98fb-ce63eef24511-kube-api-access-vxhrr\") pod \"redhat-marketplace-mpccf\" (UID: \"1bc42a63-9035-4803-98fb-ce63eef24511\") " pod="openshift-marketplace/redhat-marketplace-mpccf" Dec 11 16:02:55 crc kubenswrapper[5120]: E1211 16:02:55.555472 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:56.055460488 +0000 UTC m=+125.309763819 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.555934 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bc42a63-9035-4803-98fb-ce63eef24511-utilities\") pod \"redhat-marketplace-mpccf\" (UID: \"1bc42a63-9035-4803-98fb-ce63eef24511\") " pod="openshift-marketplace/redhat-marketplace-mpccf" Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.556141 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bc42a63-9035-4803-98fb-ce63eef24511-catalog-content\") pod \"redhat-marketplace-mpccf\" (UID: \"1bc42a63-9035-4803-98fb-ce63eef24511\") " pod="openshift-marketplace/redhat-marketplace-mpccf" Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.577885 5120 generic.go:358] "Generic (PLEG): container finished" podID="a623919a-d893-4f53-9538-2dc253a63989" containerID="fbbdb636ad061fb8c09554b00fdaf309990243e50ff393f0845cc62e27e37950" exitCode=0 Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.578023 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424480-4wgwc" event={"ID":"a623919a-d893-4f53-9538-2dc253a63989","Type":"ContainerDied","Data":"fbbdb636ad061fb8c09554b00fdaf309990243e50ff393f0845cc62e27e37950"} Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.582590 5120 generic.go:358] "Generic (PLEG): container finished" podID="4b9fbe5e-2046-431a-af21-9bfbbbecf32b" 
containerID="bed6b594111b98b816e74a60bc58558f72a4e56462f624eec3de4106729098b4" exitCode=0 Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.582730 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c2744" event={"ID":"4b9fbe5e-2046-431a-af21-9bfbbbecf32b","Type":"ContainerDied","Data":"bed6b594111b98b816e74a60bc58558f72a4e56462f624eec3de4106729098b4"} Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.588600 5120 generic.go:358] "Generic (PLEG): container finished" podID="d4f9c834-beb5-42f0-895d-eca73b7897e0" containerID="099b25d20224c0f01ae189ef7af69cea4acb5848938eff4cfc3bd413daea936b" exitCode=0 Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.588719 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r58jk" event={"ID":"d4f9c834-beb5-42f0-895d-eca73b7897e0","Type":"ContainerDied","Data":"099b25d20224c0f01ae189ef7af69cea4acb5848938eff4cfc3bd413daea936b"} Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.588749 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r58jk" event={"ID":"d4f9c834-beb5-42f0-895d-eca73b7897e0","Type":"ContainerStarted","Data":"8ed0da74d8c1af1cf5df68fd25e80676a77f7d9a3a5fc489b6836948571e5caf"} Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.595560 5120 generic.go:358] "Generic (PLEG): container finished" podID="24c0e236-bb3f-4b08-ba51-b0881c127d94" containerID="e98c25c78080ed151407b3d248f1f605d6365f08a0f8186c1ecbb4879f7e7bfc" exitCode=0 Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.597568 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k644q" event={"ID":"24c0e236-bb3f-4b08-ba51-b0881c127d94","Type":"ContainerDied","Data":"e98c25c78080ed151407b3d248f1f605d6365f08a0f8186c1ecbb4879f7e7bfc"} Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.613104 5120 kubelet.go:2544] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/redhat-marketplace-l2fzs"] Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.616865 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.617733 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxhrr\" (UniqueName: \"kubernetes.io/projected/1bc42a63-9035-4803-98fb-ce63eef24511-kube-api-access-vxhrr\") pod \"redhat-marketplace-mpccf\" (UID: \"1bc42a63-9035-4803-98fb-ce63eef24511\") " pod="openshift-marketplace/redhat-marketplace-mpccf" Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.627784 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"a663f4dad2bc6a5fe1ec338c1f0a64a0d6d616647dd0615b10a021db83128824"} Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.628830 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.632810 5120 generic.go:358] "Generic (PLEG): container finished" podID="6ca05d96-1ede-4860-abf0-dda71706ae45" containerID="1ac9fff7ebeb4d7ea33e808856c7cdf16b14fa1bbeabc4f4c741a127d503813f" exitCode=0 Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.633050 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj8n4" event={"ID":"6ca05d96-1ede-4860-abf0-dda71706ae45","Type":"ContainerDied","Data":"1ac9fff7ebeb4d7ea33e808856c7cdf16b14fa1bbeabc4f4c741a127d503813f"} Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.658617 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:55 crc kubenswrapper[5120]: E1211 16:02:55.658934 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:56.158915248 +0000 UTC m=+125.413218569 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.670278 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:55 crc kubenswrapper[5120]: E1211 16:02:55.670637 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:56.170621023 +0000 UTC m=+125.424924354 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.691126 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=28.69109772 podStartE2EDuration="28.69109772s" podCreationTimestamp="2025-12-11 16:02:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:55.689604572 +0000 UTC m=+124.943907923" watchObservedRunningTime="2025-12-11 16:02:55.69109772 +0000 UTC m=+124.945401051" Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.729351 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mpccf" Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.745418 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.771891 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:55 crc kubenswrapper[5120]: E1211 16:02:55.772322 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:56.272293398 +0000 UTC m=+125.526596739 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.874098 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:55 crc kubenswrapper[5120]: E1211 16:02:55.874464 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:56.374450735 +0000 UTC m=+125.628754066 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.975098 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:55 crc kubenswrapper[5120]: E1211 16:02:55.975225 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:56.475203967 +0000 UTC m=+125.729507298 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:55 crc kubenswrapper[5120]: I1211 16:02:55.975435 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:55 crc kubenswrapper[5120]: E1211 16:02:55.975715 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:56.47570686 +0000 UTC m=+125.730010191 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.076623 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:56 crc kubenswrapper[5120]: E1211 16:02:56.076790 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:56.576765479 +0000 UTC m=+125.831068810 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.077239 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:56 crc kubenswrapper[5120]: E1211 16:02:56.077584 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:56.577568069 +0000 UTC m=+125.831871470 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:56 crc kubenswrapper[5120]: W1211 16:02:56.096833 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bc42a63_9035_4803_98fb_ce63eef24511.slice/crio-364a2d83d50e1aa77c09d2a4a765b9e155119be8e5fdb7f16f4182a8f0942506 WatchSource:0}: Error finding container 364a2d83d50e1aa77c09d2a4a765b9e155119be8e5fdb7f16f4182a8f0942506: Status 404 returned error can't find the container with id 364a2d83d50e1aa77c09d2a4a765b9e155119be8e5fdb7f16f4182a8f0942506 Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.178695 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:56 crc kubenswrapper[5120]: E1211 16:02:56.179076 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:56.6790605 +0000 UTC m=+125.933363831 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.279858 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:56 crc kubenswrapper[5120]: E1211 16:02:56.280278 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:56.780261983 +0000 UTC m=+126.034565314 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.363844 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.363898 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mpccf"] Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.363958 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.366117 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.366124 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.380898 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:56 crc kubenswrapper[5120]: E1211 16:02:56.381409 5120 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:56.881389234 +0000 UTC m=+126.135692565 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.482529 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/776e23be-93f3-4d06-b329-70b17a179ad4-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"776e23be-93f3-4d06-b329-70b17a179ad4\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.482817 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/776e23be-93f3-4d06-b329-70b17a179ad4-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"776e23be-93f3-4d06-b329-70b17a179ad4\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.482949 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " 
pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:56 crc kubenswrapper[5120]: E1211 16:02:56.483339 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:56.983320655 +0000 UTC m=+126.237623986 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.525526 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-v9567" Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.532260 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-v9567" Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.574811 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-58qrd"] Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.584864 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.586197 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-58qrd" Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.587995 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/776e23be-93f3-4d06-b329-70b17a179ad4-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"776e23be-93f3-4d06-b329-70b17a179ad4\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 11 16:02:56 crc kubenswrapper[5120]: E1211 16:02:56.588074 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:57.088053677 +0000 UTC m=+126.342357008 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.588182 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.588285 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/776e23be-93f3-4d06-b329-70b17a179ad4-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"776e23be-93f3-4d06-b329-70b17a179ad4\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.588536 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/776e23be-93f3-4d06-b329-70b17a179ad4-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"776e23be-93f3-4d06-b329-70b17a179ad4\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 11 16:02:56 crc kubenswrapper[5120]: E1211 16:02:56.588740 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:57.088733584 +0000 UTC m=+126.343036915 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.589681 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.591709 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-58qrd"] Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.623796 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/776e23be-93f3-4d06-b329-70b17a179ad4-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"776e23be-93f3-4d06-b329-70b17a179ad4\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.643283 5120 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.653762 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-z42wj" event={"ID":"5d5549cf-9120-4619-8794-574e335d251b","Type":"ContainerStarted","Data":"cce29b1caf29e28e9c13cf901f72bf92fcb196ef057e4338e9cabfd629edaeeb"} Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.665890 5120 generic.go:358] "Generic (PLEG): container finished" podID="1bc42a63-9035-4803-98fb-ce63eef24511" containerID="112d5a9188a91972d01742947849c864a49a8f35e9e1f551246d95a7ed420ac0" exitCode=0 Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.666080 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mpccf" event={"ID":"1bc42a63-9035-4803-98fb-ce63eef24511","Type":"ContainerDied","Data":"112d5a9188a91972d01742947849c864a49a8f35e9e1f551246d95a7ed420ac0"} Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.666116 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mpccf" event={"ID":"1bc42a63-9035-4803-98fb-ce63eef24511","Type":"ContainerStarted","Data":"364a2d83d50e1aa77c09d2a4a765b9e155119be8e5fdb7f16f4182a8f0942506"} Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.673943 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" 
event={"ID":"565f3763-d271-4a37-93a4-e17d54bfe62c","Type":"ContainerStarted","Data":"77f6ad5dfa89a5d60b582b3378e87c44a96ab31cf63804b61f59dcaa3da1fff2"} Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.679205 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.682672 5120 generic.go:358] "Generic (PLEG): container finished" podID="ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02" containerID="d87352da48a537efd746ac0a42c6ea4d532c289edb06f4ac00a2aa078875a6bd" exitCode=0 Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.682803 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l2fzs" event={"ID":"ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02","Type":"ContainerDied","Data":"d87352da48a537efd746ac0a42c6ea4d532c289edb06f4ac00a2aa078875a6bd"} Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.682874 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l2fzs" event={"ID":"ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02","Type":"ContainerStarted","Data":"43c426d272479e2be31914e1fafa02647d938951bcdf7a66d259bceb5d3afaac"} Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.686972 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-v9567" Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.689657 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.689869 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d89b60e-d6e7-47df-898a-199387c5b767-catalog-content\") pod \"redhat-operators-58qrd\" (UID: \"1d89b60e-d6e7-47df-898a-199387c5b767\") " pod="openshift-marketplace/redhat-operators-58qrd" Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.690044 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqf7g\" (UniqueName: \"kubernetes.io/projected/1d89b60e-d6e7-47df-898a-199387c5b767-kube-api-access-xqf7g\") pod \"redhat-operators-58qrd\" (UID: \"1d89b60e-d6e7-47df-898a-199387c5b767\") " pod="openshift-marketplace/redhat-operators-58qrd" Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.690198 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d89b60e-d6e7-47df-898a-199387c5b767-utilities\") pod \"redhat-operators-58qrd\" (UID: \"1d89b60e-d6e7-47df-898a-199387c5b767\") " pod="openshift-marketplace/redhat-operators-58qrd" Dec 11 16:02:56 crc kubenswrapper[5120]: E1211 16:02:56.690542 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:57.190511862 +0000 UTC m=+126.444815193 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.700947 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=3.700930965 podStartE2EDuration="3.700930965s" podCreationTimestamp="2025-12-11 16:02:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:56.698865163 +0000 UTC m=+125.953168484" watchObservedRunningTime="2025-12-11 16:02:56.700930965 +0000 UTC m=+125.955234286" Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.793261 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.793313 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xqf7g\" (UniqueName: \"kubernetes.io/projected/1d89b60e-d6e7-47df-898a-199387c5b767-kube-api-access-xqf7g\") pod \"redhat-operators-58qrd\" (UID: \"1d89b60e-d6e7-47df-898a-199387c5b767\") " pod="openshift-marketplace/redhat-operators-58qrd" Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.793413 5120 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d89b60e-d6e7-47df-898a-199387c5b767-utilities\") pod \"redhat-operators-58qrd\" (UID: \"1d89b60e-d6e7-47df-898a-199387c5b767\") " pod="openshift-marketplace/redhat-operators-58qrd" Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.793475 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d89b60e-d6e7-47df-898a-199387c5b767-catalog-content\") pod \"redhat-operators-58qrd\" (UID: \"1d89b60e-d6e7-47df-898a-199387c5b767\") " pod="openshift-marketplace/redhat-operators-58qrd" Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.796064 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d89b60e-d6e7-47df-898a-199387c5b767-utilities\") pod \"redhat-operators-58qrd\" (UID: \"1d89b60e-d6e7-47df-898a-199387c5b767\") " pod="openshift-marketplace/redhat-operators-58qrd" Dec 11 16:02:56 crc kubenswrapper[5120]: E1211 16:02:56.796615 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:57.296598948 +0000 UTC m=+126.550902279 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.796858 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d89b60e-d6e7-47df-898a-199387c5b767-catalog-content\") pod \"redhat-operators-58qrd\" (UID: \"1d89b60e-d6e7-47df-898a-199387c5b767\") " pod="openshift-marketplace/redhat-operators-58qrd" Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.832477 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqf7g\" (UniqueName: \"kubernetes.io/projected/1d89b60e-d6e7-47df-898a-199387c5b767-kube-api-access-xqf7g\") pod \"redhat-operators-58qrd\" (UID: \"1d89b60e-d6e7-47df-898a-199387c5b767\") " pod="openshift-marketplace/redhat-operators-58qrd" Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.895967 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:56 crc kubenswrapper[5120]: E1211 16:02:56.896368 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-11 16:02:57.396352545 +0000 UTC m=+126.650655876 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.910833 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424480-4wgwc" Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.967542 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-96ltm"] Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.968104 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a623919a-d893-4f53-9538-2dc253a63989" containerName="collect-profiles" Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.968121 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="a623919a-d893-4f53-9538-2dc253a63989" containerName="collect-profiles" Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.968234 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="a623919a-d893-4f53-9538-2dc253a63989" containerName="collect-profiles" Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.971332 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-96ltm" Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.977296 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-96ltm"] Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.996686 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a623919a-d893-4f53-9538-2dc253a63989-config-volume\") pod \"a623919a-d893-4f53-9538-2dc253a63989\" (UID: \"a623919a-d893-4f53-9538-2dc253a63989\") " Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.996755 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a623919a-d893-4f53-9538-2dc253a63989-secret-volume\") pod \"a623919a-d893-4f53-9538-2dc253a63989\" (UID: \"a623919a-d893-4f53-9538-2dc253a63989\") " Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.996965 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lggkz\" (UniqueName: \"kubernetes.io/projected/a623919a-d893-4f53-9538-2dc253a63989-kube-api-access-lggkz\") pod \"a623919a-d893-4f53-9538-2dc253a63989\" (UID: \"a623919a-d893-4f53-9538-2dc253a63989\") " Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.997276 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:56 crc kubenswrapper[5120]: E1211 16:02:56.997858 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: 
nodeName:}" failed. No retries permitted until 2025-12-11 16:02:57.497838025 +0000 UTC m=+126.752141356 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:56 crc kubenswrapper[5120]: I1211 16:02:56.998845 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a623919a-d893-4f53-9538-2dc253a63989-config-volume" (OuterVolumeSpecName: "config-volume") pod "a623919a-d893-4f53-9538-2dc253a63989" (UID: "a623919a-d893-4f53-9538-2dc253a63989"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.003049 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a623919a-d893-4f53-9538-2dc253a63989-kube-api-access-lggkz" (OuterVolumeSpecName: "kube-api-access-lggkz") pod "a623919a-d893-4f53-9538-2dc253a63989" (UID: "a623919a-d893-4f53-9538-2dc253a63989"). InnerVolumeSpecName "kube-api-access-lggkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.003241 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a623919a-d893-4f53-9538-2dc253a63989-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a623919a-d893-4f53-9538-2dc253a63989" (UID: "a623919a-d893-4f53-9538-2dc253a63989"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.034700 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-58qrd" Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.098888 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.099138 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10b5afee-6cb5-425d-af1b-b44a204542f3-utilities\") pod \"redhat-operators-96ltm\" (UID: \"10b5afee-6cb5-425d-af1b-b44a204542f3\") " pod="openshift-marketplace/redhat-operators-96ltm" Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.099237 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10b5afee-6cb5-425d-af1b-b44a204542f3-catalog-content\") pod \"redhat-operators-96ltm\" (UID: \"10b5afee-6cb5-425d-af1b-b44a204542f3\") " pod="openshift-marketplace/redhat-operators-96ltm" Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.099286 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wq5m\" (UniqueName: \"kubernetes.io/projected/10b5afee-6cb5-425d-af1b-b44a204542f3-kube-api-access-7wq5m\") pod \"redhat-operators-96ltm\" (UID: \"10b5afee-6cb5-425d-af1b-b44a204542f3\") " pod="openshift-marketplace/redhat-operators-96ltm" Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.099349 5120 reconciler_common.go:299] "Volume detached for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/a623919a-d893-4f53-9538-2dc253a63989-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.099359 5120 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a623919a-d893-4f53-9538-2dc253a63989-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.099368 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lggkz\" (UniqueName: \"kubernetes.io/projected/a623919a-d893-4f53-9538-2dc253a63989-kube-api-access-lggkz\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:57 crc kubenswrapper[5120]: E1211 16:02:57.099444 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:57.599428327 +0000 UTC m=+126.853731658 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.162269 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.202228 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.202265 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10b5afee-6cb5-425d-af1b-b44a204542f3-catalog-content\") pod \"redhat-operators-96ltm\" (UID: \"10b5afee-6cb5-425d-af1b-b44a204542f3\") " pod="openshift-marketplace/redhat-operators-96ltm" Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.202307 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7wq5m\" (UniqueName: \"kubernetes.io/projected/10b5afee-6cb5-425d-af1b-b44a204542f3-kube-api-access-7wq5m\") pod \"redhat-operators-96ltm\" (UID: \"10b5afee-6cb5-425d-af1b-b44a204542f3\") " pod="openshift-marketplace/redhat-operators-96ltm" Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.202423 5120 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10b5afee-6cb5-425d-af1b-b44a204542f3-utilities\") pod \"redhat-operators-96ltm\" (UID: \"10b5afee-6cb5-425d-af1b-b44a204542f3\") " pod="openshift-marketplace/redhat-operators-96ltm" Dec 11 16:02:57 crc kubenswrapper[5120]: E1211 16:02:57.202532 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:57.702520838 +0000 UTC m=+126.956824169 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.202904 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10b5afee-6cb5-425d-af1b-b44a204542f3-utilities\") pod \"redhat-operators-96ltm\" (UID: \"10b5afee-6cb5-425d-af1b-b44a204542f3\") " pod="openshift-marketplace/redhat-operators-96ltm" Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.204466 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10b5afee-6cb5-425d-af1b-b44a204542f3-catalog-content\") pod \"redhat-operators-96ltm\" (UID: \"10b5afee-6cb5-425d-af1b-b44a204542f3\") " pod="openshift-marketplace/redhat-operators-96ltm" Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.234170 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wq5m\" (UniqueName: 
\"kubernetes.io/projected/10b5afee-6cb5-425d-af1b-b44a204542f3-kube-api-access-7wq5m\") pod \"redhat-operators-96ltm\" (UID: \"10b5afee-6cb5-425d-af1b-b44a204542f3\") " pod="openshift-marketplace/redhat-operators-96ltm" Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.285143 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-58qrd"] Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.292995 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-96ltm" Dec 11 16:02:57 crc kubenswrapper[5120]: W1211 16:02:57.302040 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d89b60e_d6e7_47df_898a_199387c5b767.slice/crio-36ed1aa715057a36f0cb02f23ba2f6853493e908e4b461793eb6b6d999d6a96f WatchSource:0}: Error finding container 36ed1aa715057a36f0cb02f23ba2f6853493e908e4b461793eb6b6d999d6a96f: Status 404 returned error can't find the container with id 36ed1aa715057a36f0cb02f23ba2f6853493e908e4b461793eb6b6d999d6a96f Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.303428 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:57 crc kubenswrapper[5120]: E1211 16:02:57.303546 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-11 16:02:57.803522795 +0000 UTC m=+127.057826126 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.303903 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:57 crc kubenswrapper[5120]: E1211 16:02:57.304201 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-11 16:02:57.804193442 +0000 UTC m=+127.058496773 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-s2npb" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.330254 5120 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-11T16:02:56.643319641Z","UUID":"1352abf4-2f67-4e5a-a43e-604e1bbd6fd0","Handler":null,"Name":"","Endpoint":""} Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.367925 5120 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.367967 5120 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.405104 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.419662 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: 
"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.508509 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.511097 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-96ltm"] Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.512503 5120 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.512642 5120 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.548574 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-s2npb\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.686968 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.691280 5120 generic.go:358] "Generic (PLEG): container finished" podID="565f3763-d271-4a37-93a4-e17d54bfe62c" containerID="77f6ad5dfa89a5d60b582b3378e87c44a96ab31cf63804b61f59dcaa3da1fff2" exitCode=0 Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.691379 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"565f3763-d271-4a37-93a4-e17d54bfe62c","Type":"ContainerDied","Data":"77f6ad5dfa89a5d60b582b3378e87c44a96ab31cf63804b61f59dcaa3da1fff2"} Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.693446 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"776e23be-93f3-4d06-b329-70b17a179ad4","Type":"ContainerStarted","Data":"852a41a3c6e374a7bba444c2a66b641ba57fbbb2cb045526f3c6acc572dd1d0e"} Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.693470 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"776e23be-93f3-4d06-b329-70b17a179ad4","Type":"ContainerStarted","Data":"b37c1c398618d373301da161a534549ed2dc2085d0212023bbf4c99707cfec2c"} Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.695553 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.696159 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424480-4wgwc" Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.696168 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424480-4wgwc" event={"ID":"a623919a-d893-4f53-9538-2dc253a63989","Type":"ContainerDied","Data":"68a5c33570cda3f3e50bf95627fff552e7e5279803368de02eab51205635f000"} Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.696209 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68a5c33570cda3f3e50bf95627fff552e7e5279803368de02eab51205635f000" Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.698124 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96ltm" event={"ID":"10b5afee-6cb5-425d-af1b-b44a204542f3","Type":"ContainerStarted","Data":"7d6a31055b8a4767328faa9e52d1661dbfb6949fa6b171144f30c950f5e90e85"} Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.700920 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-z42wj" event={"ID":"5d5549cf-9120-4619-8794-574e335d251b","Type":"ContainerStarted","Data":"dbf03975ac8d89c2229f59c1c4078a4028da51035e639762d60d8e9aa70d2201"} Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.700949 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-z42wj" event={"ID":"5d5549cf-9120-4619-8794-574e335d251b","Type":"ContainerStarted","Data":"4a72bb5eace7f3fc56392a1a912b3615caae24ea7cd0b56ca56d926d11218a82"} Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.703604 5120 generic.go:358] "Generic (PLEG): container finished" podID="1d89b60e-d6e7-47df-898a-199387c5b767" containerID="ad9ab087692d4f2727617f8ecd68b3b20b0d51e60feb15fe877c25947edeb1de" exitCode=0 Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.704798 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58qrd" event={"ID":"1d89b60e-d6e7-47df-898a-199387c5b767","Type":"ContainerDied","Data":"ad9ab087692d4f2727617f8ecd68b3b20b0d51e60feb15fe877c25947edeb1de"} Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.704825 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58qrd" event={"ID":"1d89b60e-d6e7-47df-898a-199387c5b767","Type":"ContainerStarted","Data":"36ed1aa715057a36f0cb02f23ba2f6853493e908e4b461793eb6b6d999d6a96f"} Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.790987 5120 ???:1] "http: TLS handshake error from 192.168.126.11:50346: no serving certificate available for the kubelet" Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.802836 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-z42wj" podStartSLOduration=15.80281607 podStartE2EDuration="15.80281607s" podCreationTimestamp="2025-12-11 16:02:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:57.800122473 +0000 UTC m=+127.054425814" watchObservedRunningTime="2025-12-11 16:02:57.80281607 +0000 UTC m=+127.057119411" Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.826796 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=2.826781515 podStartE2EDuration="2.826781515s" podCreationTimestamp="2025-12-11 16:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:57.823915433 +0000 UTC m=+127.078218764" watchObservedRunningTime="2025-12-11 16:02:57.826781515 +0000 UTC m=+127.081084846" Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.957806 5120 patch_prober.go:28] interesting pod/downloads-747b44746d-8rdg7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Dec 11 16:02:57 crc kubenswrapper[5120]: I1211 16:02:57.957867 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-8rdg7" podUID="726e6606-5b45-4bba-865a-f581e8f6c218" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" Dec 11 16:02:58 crc kubenswrapper[5120]: I1211 16:02:58.103839 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-s2npb"] Dec 11 16:02:58 crc kubenswrapper[5120]: I1211 16:02:58.166582 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-wvrn4" Dec 11 16:02:58 crc kubenswrapper[5120]: I1211 16:02:58.168012 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-console/console-64d44f6ddf-wvrn4" Dec 11 16:02:58 crc kubenswrapper[5120]: I1211 16:02:58.170660 5120 patch_prober.go:28] interesting pod/console-64d44f6ddf-wvrn4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Dec 11 16:02:58 crc kubenswrapper[5120]: I1211 16:02:58.170950 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-wvrn4" podUID="d3ba2b9b-a777-4c95-bdd8-3feda00275ef" containerName="console" probeResult="failure" output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" Dec 11 16:02:58 crc kubenswrapper[5120]: I1211 16:02:58.711617 5120 generic.go:358] "Generic (PLEG): container finished" podID="776e23be-93f3-4d06-b329-70b17a179ad4" containerID="852a41a3c6e374a7bba444c2a66b641ba57fbbb2cb045526f3c6acc572dd1d0e" exitCode=0 Dec 11 16:02:58 crc kubenswrapper[5120]: I1211 16:02:58.711707 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"776e23be-93f3-4d06-b329-70b17a179ad4","Type":"ContainerDied","Data":"852a41a3c6e374a7bba444c2a66b641ba57fbbb2cb045526f3c6acc572dd1d0e"} Dec 11 16:02:58 crc kubenswrapper[5120]: I1211 16:02:58.713587 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-s2npb" event={"ID":"e502d0f9-d2f5-433f-ad5c-5353c996ba0e","Type":"ContainerStarted","Data":"61abd2db50eb2726a6fd59f5066c2c069871cac00ee6884254875da3b0f9a032"} Dec 11 16:02:58 crc kubenswrapper[5120]: I1211 16:02:58.713615 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-s2npb" event={"ID":"e502d0f9-d2f5-433f-ad5c-5353c996ba0e","Type":"ContainerStarted","Data":"4fbe34ec792244e18e9f2e7be4734570c3a0ce36cc252aa14c88b45ea00f3808"} Dec 11 16:02:58 
crc kubenswrapper[5120]: I1211 16:02:58.713675 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:02:58 crc kubenswrapper[5120]: I1211 16:02:58.715841 5120 generic.go:358] "Generic (PLEG): container finished" podID="10b5afee-6cb5-425d-af1b-b44a204542f3" containerID="a502e43d734b6a019178c53700fb455ea7609bab3b0ffcb05696b7ec1be2a94f" exitCode=0 Dec 11 16:02:58 crc kubenswrapper[5120]: I1211 16:02:58.716268 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96ltm" event={"ID":"10b5afee-6cb5-425d-af1b-b44a204542f3","Type":"ContainerDied","Data":"a502e43d734b6a019178c53700fb455ea7609bab3b0ffcb05696b7ec1be2a94f"} Dec 11 16:02:58 crc kubenswrapper[5120]: I1211 16:02:58.776666 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-s2npb" podStartSLOduration=109.776649147 podStartE2EDuration="1m49.776649147s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:02:58.776431381 +0000 UTC m=+128.030734722" watchObservedRunningTime="2025-12-11 16:02:58.776649147 +0000 UTC m=+128.030952478" Dec 11 16:02:58 crc kubenswrapper[5120]: I1211 16:02:58.993613 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 11 16:02:59 crc kubenswrapper[5120]: I1211 16:02:59.029415 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Dec 11 16:02:59 crc kubenswrapper[5120]: I1211 16:02:59.150331 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/565f3763-d271-4a37-93a4-e17d54bfe62c-kube-api-access\") pod \"565f3763-d271-4a37-93a4-e17d54bfe62c\" (UID: \"565f3763-d271-4a37-93a4-e17d54bfe62c\") " Dec 11 16:02:59 crc kubenswrapper[5120]: I1211 16:02:59.150544 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/565f3763-d271-4a37-93a4-e17d54bfe62c-kubelet-dir\") pod \"565f3763-d271-4a37-93a4-e17d54bfe62c\" (UID: \"565f3763-d271-4a37-93a4-e17d54bfe62c\") " Dec 11 16:02:59 crc kubenswrapper[5120]: I1211 16:02:59.150780 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/565f3763-d271-4a37-93a4-e17d54bfe62c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "565f3763-d271-4a37-93a4-e17d54bfe62c" (UID: "565f3763-d271-4a37-93a4-e17d54bfe62c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:02:59 crc kubenswrapper[5120]: I1211 16:02:59.174546 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/565f3763-d271-4a37-93a4-e17d54bfe62c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "565f3763-d271-4a37-93a4-e17d54bfe62c" (UID: "565f3763-d271-4a37-93a4-e17d54bfe62c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:02:59 crc kubenswrapper[5120]: I1211 16:02:59.252322 5120 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/565f3763-d271-4a37-93a4-e17d54bfe62c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:59 crc kubenswrapper[5120]: I1211 16:02:59.252359 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/565f3763-d271-4a37-93a4-e17d54bfe62c-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 16:02:59 crc kubenswrapper[5120]: I1211 16:02:59.725424 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"565f3763-d271-4a37-93a4-e17d54bfe62c","Type":"ContainerDied","Data":"30fa9dfe49ebd4f3a9fe8b66d9a850f7fbab18accd6c3a170991e204701e4566"} Dec 11 16:02:59 crc kubenswrapper[5120]: I1211 16:02:59.725968 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30fa9dfe49ebd4f3a9fe8b66d9a850f7fbab18accd6c3a170991e204701e4566" Dec 11 16:02:59 crc kubenswrapper[5120]: I1211 16:02:59.725485 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 11 16:03:00 crc kubenswrapper[5120]: I1211 16:03:00.168559 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:03:00 crc kubenswrapper[5120]: I1211 16:03:00.168983 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:03:00 crc kubenswrapper[5120]: I1211 16:03:00.169088 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:03:00 crc kubenswrapper[5120]: I1211 16:03:00.169136 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:03:00 crc kubenswrapper[5120]: I1211 16:03:00.170376 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 11 16:03:00 crc kubenswrapper[5120]: I1211 16:03:00.171725 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 11 16:03:00 crc kubenswrapper[5120]: I1211 16:03:00.171809 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 11 16:03:00 crc kubenswrapper[5120]: I1211 16:03:00.181269 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 11 16:03:00 crc kubenswrapper[5120]: I1211 16:03:00.202022 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:03:00 crc kubenswrapper[5120]: I1211 16:03:00.202037 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:03:00 crc kubenswrapper[5120]: I1211 16:03:00.207646 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 
16:03:00 crc kubenswrapper[5120]: I1211 16:03:00.270717 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1d42362-2047-47d8-b096-bd9f85606eeb-metrics-certs\") pod \"network-metrics-daemon-ccl9q\" (UID: \"f1d42362-2047-47d8-b096-bd9f85606eeb\") " pod="openshift-multus/network-metrics-daemon-ccl9q" Dec 11 16:03:00 crc kubenswrapper[5120]: I1211 16:03:00.272622 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 11 16:03:00 crc kubenswrapper[5120]: I1211 16:03:00.297690 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1d42362-2047-47d8-b096-bd9f85606eeb-metrics-certs\") pod \"network-metrics-daemon-ccl9q\" (UID: \"f1d42362-2047-47d8-b096-bd9f85606eeb\") " pod="openshift-multus/network-metrics-daemon-ccl9q" Dec 11 16:03:00 crc kubenswrapper[5120]: I1211 16:03:00.318032 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:03:00 crc kubenswrapper[5120]: I1211 16:03:00.454712 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 11 16:03:00 crc kubenswrapper[5120]: I1211 16:03:00.480040 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 11 16:03:00 crc kubenswrapper[5120]: I1211 16:03:00.494180 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 11 16:03:00 crc kubenswrapper[5120]: I1211 16:03:00.510769 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 11 16:03:00 crc kubenswrapper[5120]: I1211 16:03:00.519744 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ccl9q" Dec 11 16:03:01 crc kubenswrapper[5120]: E1211 16:03:01.433142 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b9694e04edc1482d846aea56f4a8549e6e163687fb078d54c939afbe48100e19" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 11 16:03:01 crc kubenswrapper[5120]: E1211 16:03:01.435445 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b9694e04edc1482d846aea56f4a8549e6e163687fb078d54c939afbe48100e19" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 11 16:03:01 crc kubenswrapper[5120]: E1211 16:03:01.437053 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b9694e04edc1482d846aea56f4a8549e6e163687fb078d54c939afbe48100e19" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 11 16:03:01 crc kubenswrapper[5120]: E1211 16:03:01.437138 5120 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-sgddk" 
podUID="ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 11 16:03:03 crc kubenswrapper[5120]: I1211 16:03:03.556853 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-8rdg7" Dec 11 16:03:03 crc kubenswrapper[5120]: I1211 16:03:03.891996 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-5qdss" Dec 11 16:03:05 crc kubenswrapper[5120]: I1211 16:03:05.821786 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 11 16:03:05 crc kubenswrapper[5120]: I1211 16:03:05.948192 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/776e23be-93f3-4d06-b329-70b17a179ad4-kube-api-access\") pod \"776e23be-93f3-4d06-b329-70b17a179ad4\" (UID: \"776e23be-93f3-4d06-b329-70b17a179ad4\") " Dec 11 16:03:05 crc kubenswrapper[5120]: I1211 16:03:05.949251 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/776e23be-93f3-4d06-b329-70b17a179ad4-kubelet-dir\") pod \"776e23be-93f3-4d06-b329-70b17a179ad4\" (UID: \"776e23be-93f3-4d06-b329-70b17a179ad4\") " Dec 11 16:03:05 crc kubenswrapper[5120]: I1211 16:03:05.949482 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/776e23be-93f3-4d06-b329-70b17a179ad4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "776e23be-93f3-4d06-b329-70b17a179ad4" (UID: "776e23be-93f3-4d06-b329-70b17a179ad4"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:03:05 crc kubenswrapper[5120]: I1211 16:03:05.950426 5120 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/776e23be-93f3-4d06-b329-70b17a179ad4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 11 16:03:05 crc kubenswrapper[5120]: I1211 16:03:05.958396 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/776e23be-93f3-4d06-b329-70b17a179ad4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "776e23be-93f3-4d06-b329-70b17a179ad4" (UID: "776e23be-93f3-4d06-b329-70b17a179ad4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:03:06 crc kubenswrapper[5120]: I1211 16:03:06.052231 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/776e23be-93f3-4d06-b329-70b17a179ad4-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 16:03:06 crc kubenswrapper[5120]: I1211 16:03:06.687727 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:03:06 crc kubenswrapper[5120]: I1211 16:03:06.769948 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"776e23be-93f3-4d06-b329-70b17a179ad4","Type":"ContainerDied","Data":"b37c1c398618d373301da161a534549ed2dc2085d0212023bbf4c99707cfec2c"} Dec 11 16:03:06 crc kubenswrapper[5120]: I1211 16:03:06.769996 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b37c1c398618d373301da161a534549ed2dc2085d0212023bbf4c99707cfec2c" Dec 11 16:03:06 crc kubenswrapper[5120]: I1211 16:03:06.770109 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 11 16:03:08 crc kubenswrapper[5120]: I1211 16:03:08.063000 5120 ???:1] "http: TLS handshake error from 192.168.126.11:37132: no serving certificate available for the kubelet" Dec 11 16:03:08 crc kubenswrapper[5120]: I1211 16:03:08.173211 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-wvrn4" Dec 11 16:03:08 crc kubenswrapper[5120]: I1211 16:03:08.182493 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-wvrn4" Dec 11 16:03:08 crc kubenswrapper[5120]: I1211 16:03:08.220320 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-9jr6t"] Dec 11 16:03:08 crc kubenswrapper[5120]: I1211 16:03:08.220626 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t" podUID="5ca3a5e5-2aab-4e5b-8756-2a725e8b3346" containerName="controller-manager" containerID="cri-o://cb981134b091a0de58f54d5143a7c44ab8701e3219c07959b9ef3f927714ff7f" gracePeriod=30 Dec 11 16:03:08 crc kubenswrapper[5120]: I1211 16:03:08.229345 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2"] Dec 11 16:03:08 crc kubenswrapper[5120]: I1211 16:03:08.229659 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2" podUID="90663dc8-366f-45cb-8db2-8360cdc28f74" containerName="route-controller-manager" containerID="cri-o://67f2451e868c6e43ffdff925eb1f02e405c2012369989db507653ffea6a60d53" gracePeriod=30 Dec 11 16:03:09 crc kubenswrapper[5120]: I1211 16:03:09.457747 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" Dec 11 16:03:11 crc kubenswrapper[5120]: E1211 16:03:11.434183 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b9694e04edc1482d846aea56f4a8549e6e163687fb078d54c939afbe48100e19" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 11 16:03:11 crc kubenswrapper[5120]: E1211 16:03:11.435873 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b9694e04edc1482d846aea56f4a8549e6e163687fb078d54c939afbe48100e19" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 11 16:03:11 crc kubenswrapper[5120]: E1211 16:03:11.436967 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b9694e04edc1482d846aea56f4a8549e6e163687fb078d54c939afbe48100e19" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 11 16:03:11 crc kubenswrapper[5120]: E1211 16:03:11.437003 5120 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-sgddk" podUID="ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 11 16:03:13 crc kubenswrapper[5120]: I1211 16:03:13.815553 5120 generic.go:358] "Generic (PLEG): container finished" podID="90663dc8-366f-45cb-8db2-8360cdc28f74" containerID="67f2451e868c6e43ffdff925eb1f02e405c2012369989db507653ffea6a60d53" exitCode=0 Dec 11 16:03:13 crc kubenswrapper[5120]: I1211 16:03:13.815621 5120 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2" event={"ID":"90663dc8-366f-45cb-8db2-8360cdc28f74","Type":"ContainerDied","Data":"67f2451e868c6e43ffdff925eb1f02e405c2012369989db507653ffea6a60d53"} Dec 11 16:03:13 crc kubenswrapper[5120]: I1211 16:03:13.822596 5120 generic.go:358] "Generic (PLEG): container finished" podID="5ca3a5e5-2aab-4e5b-8756-2a725e8b3346" containerID="cb981134b091a0de58f54d5143a7c44ab8701e3219c07959b9ef3f927714ff7f" exitCode=0 Dec 11 16:03:13 crc kubenswrapper[5120]: I1211 16:03:13.822672 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t" event={"ID":"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346","Type":"ContainerDied","Data":"cb981134b091a0de58f54d5143a7c44ab8701e3219c07959b9ef3f927714ff7f"} Dec 11 16:03:16 crc kubenswrapper[5120]: I1211 16:03:16.487334 5120 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-9b4f2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Dec 11 16:03:16 crc kubenswrapper[5120]: I1211 16:03:16.487420 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2" podUID="90663dc8-366f-45cb-8db2-8360cdc28f74" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Dec 11 16:03:16 crc kubenswrapper[5120]: I1211 16:03:16.491105 5120 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-9jr6t container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 11 
16:03:16 crc kubenswrapper[5120]: I1211 16:03:16.491191 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t" podUID="5ca3a5e5-2aab-4e5b-8756-2a725e8b3346" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 11 16:03:19 crc kubenswrapper[5120]: I1211 16:03:19.732582 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.359226 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.448173 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-config\") pod \"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346\" (UID: \"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346\") " Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.448408 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-proxy-ca-bundles\") pod \"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346\" (UID: \"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346\") " Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.448586 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zx5xj\" (UniqueName: \"kubernetes.io/projected/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-kube-api-access-zx5xj\") pod \"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346\" (UID: \"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346\") " Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.448719 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-serving-cert\") pod \"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346\" (UID: \"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346\") " Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.448795 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-client-ca\") pod \"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346\" (UID: \"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346\") " Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.448901 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-tmp\") pod \"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346\" (UID: \"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346\") " Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.449732 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-tmp" (OuterVolumeSpecName: "tmp") pod "5ca3a5e5-2aab-4e5b-8756-2a725e8b3346" (UID: "5ca3a5e5-2aab-4e5b-8756-2a725e8b3346"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.449892 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5ca3a5e5-2aab-4e5b-8756-2a725e8b3346" (UID: "5ca3a5e5-2aab-4e5b-8756-2a725e8b3346"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.454658 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-client-ca" (OuterVolumeSpecName: "client-ca") pod "5ca3a5e5-2aab-4e5b-8756-2a725e8b3346" (UID: "5ca3a5e5-2aab-4e5b-8756-2a725e8b3346"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.455049 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-config" (OuterVolumeSpecName: "config") pod "5ca3a5e5-2aab-4e5b-8756-2a725e8b3346" (UID: "5ca3a5e5-2aab-4e5b-8756-2a725e8b3346"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.461056 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5ca3a5e5-2aab-4e5b-8756-2a725e8b3346" (UID: "5ca3a5e5-2aab-4e5b-8756-2a725e8b3346"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.465256 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d59777c6b-65rwb"] Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.467688 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="776e23be-93f3-4d06-b329-70b17a179ad4" containerName="pruner" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.467714 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="776e23be-93f3-4d06-b329-70b17a179ad4" containerName="pruner" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.467745 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5ca3a5e5-2aab-4e5b-8756-2a725e8b3346" containerName="controller-manager" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.467750 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ca3a5e5-2aab-4e5b-8756-2a725e8b3346" containerName="controller-manager" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.467940 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="565f3763-d271-4a37-93a4-e17d54bfe62c" containerName="pruner" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.467954 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="565f3763-d271-4a37-93a4-e17d54bfe62c" containerName="pruner" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.468124 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="776e23be-93f3-4d06-b329-70b17a179ad4" containerName="pruner" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.468137 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="5ca3a5e5-2aab-4e5b-8756-2a725e8b3346" containerName="controller-manager" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.468161 5120 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="565f3763-d271-4a37-93a4-e17d54bfe62c" containerName="pruner" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.473786 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-kube-api-access-zx5xj" (OuterVolumeSpecName: "kube-api-access-zx5xj") pod "5ca3a5e5-2aab-4e5b-8756-2a725e8b3346" (UID: "5ca3a5e5-2aab-4e5b-8756-2a725e8b3346"). InnerVolumeSpecName "kube-api-access-zx5xj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.475558 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d59777c6b-65rwb"] Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.475667 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d59777c6b-65rwb" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.518723 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.550618 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90663dc8-366f-45cb-8db2-8360cdc28f74-config\") pod \"90663dc8-366f-45cb-8db2-8360cdc28f74\" (UID: \"90663dc8-366f-45cb-8db2-8360cdc28f74\") " Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.550690 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnmmk\" (UniqueName: \"kubernetes.io/projected/90663dc8-366f-45cb-8db2-8360cdc28f74-kube-api-access-nnmmk\") pod \"90663dc8-366f-45cb-8db2-8360cdc28f74\" (UID: \"90663dc8-366f-45cb-8db2-8360cdc28f74\") " Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.550769 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90663dc8-366f-45cb-8db2-8360cdc28f74-serving-cert\") pod \"90663dc8-366f-45cb-8db2-8360cdc28f74\" (UID: \"90663dc8-366f-45cb-8db2-8360cdc28f74\") " Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.550809 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/90663dc8-366f-45cb-8db2-8360cdc28f74-tmp\") pod \"90663dc8-366f-45cb-8db2-8360cdc28f74\" (UID: \"90663dc8-366f-45cb-8db2-8360cdc28f74\") " Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.550855 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/90663dc8-366f-45cb-8db2-8360cdc28f74-client-ca\") pod \"90663dc8-366f-45cb-8db2-8360cdc28f74\" (UID: \"90663dc8-366f-45cb-8db2-8360cdc28f74\") " Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.550986 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/07c76ad9-652a-48d6-ae97-b81575835d05-config\") pod \"controller-manager-d59777c6b-65rwb\" (UID: \"07c76ad9-652a-48d6-ae97-b81575835d05\") " pod="openshift-controller-manager/controller-manager-d59777c6b-65rwb" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.551041 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hkvr\" (UniqueName: \"kubernetes.io/projected/07c76ad9-652a-48d6-ae97-b81575835d05-kube-api-access-9hkvr\") pod \"controller-manager-d59777c6b-65rwb\" (UID: \"07c76ad9-652a-48d6-ae97-b81575835d05\") " pod="openshift-controller-manager/controller-manager-d59777c6b-65rwb" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.551074 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07c76ad9-652a-48d6-ae97-b81575835d05-client-ca\") pod \"controller-manager-d59777c6b-65rwb\" (UID: \"07c76ad9-652a-48d6-ae97-b81575835d05\") " pod="openshift-controller-manager/controller-manager-d59777c6b-65rwb" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.551094 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07c76ad9-652a-48d6-ae97-b81575835d05-serving-cert\") pod \"controller-manager-d59777c6b-65rwb\" (UID: \"07c76ad9-652a-48d6-ae97-b81575835d05\") " pod="openshift-controller-manager/controller-manager-d59777c6b-65rwb" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.551142 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/07c76ad9-652a-48d6-ae97-b81575835d05-proxy-ca-bundles\") pod \"controller-manager-d59777c6b-65rwb\" (UID: \"07c76ad9-652a-48d6-ae97-b81575835d05\") " 
pod="openshift-controller-manager/controller-manager-d59777c6b-65rwb" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.551181 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/07c76ad9-652a-48d6-ae97-b81575835d05-tmp\") pod \"controller-manager-d59777c6b-65rwb\" (UID: \"07c76ad9-652a-48d6-ae97-b81575835d05\") " pod="openshift-controller-manager/controller-manager-d59777c6b-65rwb" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.551231 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.551241 5120 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.551250 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zx5xj\" (UniqueName: \"kubernetes.io/projected/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-kube-api-access-zx5xj\") on node \"crc\" DevicePath \"\"" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.551259 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.551267 5120 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.551275 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346-tmp\") on node \"crc\" DevicePath \"\"" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.552019 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90663dc8-366f-45cb-8db2-8360cdc28f74-tmp" (OuterVolumeSpecName: "tmp") pod "90663dc8-366f-45cb-8db2-8360cdc28f74" (UID: "90663dc8-366f-45cb-8db2-8360cdc28f74"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.552833 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90663dc8-366f-45cb-8db2-8360cdc28f74-config" (OuterVolumeSpecName: "config") pod "90663dc8-366f-45cb-8db2-8360cdc28f74" (UID: "90663dc8-366f-45cb-8db2-8360cdc28f74"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.553840 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90663dc8-366f-45cb-8db2-8360cdc28f74-client-ca" (OuterVolumeSpecName: "client-ca") pod "90663dc8-366f-45cb-8db2-8360cdc28f74" (UID: "90663dc8-366f-45cb-8db2-8360cdc28f74"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.563276 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90663dc8-366f-45cb-8db2-8360cdc28f74-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "90663dc8-366f-45cb-8db2-8360cdc28f74" (UID: "90663dc8-366f-45cb-8db2-8360cdc28f74"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.564051 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq"] Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.564671 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="90663dc8-366f-45cb-8db2-8360cdc28f74" containerName="route-controller-manager" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.564689 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="90663dc8-366f-45cb-8db2-8360cdc28f74" containerName="route-controller-manager" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.564792 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="90663dc8-366f-45cb-8db2-8360cdc28f74" containerName="route-controller-manager" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.571651 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90663dc8-366f-45cb-8db2-8360cdc28f74-kube-api-access-nnmmk" (OuterVolumeSpecName: "kube-api-access-nnmmk") pod "90663dc8-366f-45cb-8db2-8360cdc28f74" (UID: "90663dc8-366f-45cb-8db2-8360cdc28f74"). InnerVolumeSpecName "kube-api-access-nnmmk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.574859 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq"] Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.575170 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.652274 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07c76ad9-652a-48d6-ae97-b81575835d05-config\") pod \"controller-manager-d59777c6b-65rwb\" (UID: \"07c76ad9-652a-48d6-ae97-b81575835d05\") " pod="openshift-controller-manager/controller-manager-d59777c6b-65rwb"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.652530 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smw46\" (UniqueName: \"kubernetes.io/projected/2273f8ab-6d63-4118-bd7a-10b2e36551de-kube-api-access-smw46\") pod \"route-controller-manager-7bf74b8778-rdgnq\" (UID: \"2273f8ab-6d63-4118-bd7a-10b2e36551de\") " pod="openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.652570 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2273f8ab-6d63-4118-bd7a-10b2e36551de-client-ca\") pod \"route-controller-manager-7bf74b8778-rdgnq\" (UID: \"2273f8ab-6d63-4118-bd7a-10b2e36551de\") " pod="openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.652598 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2273f8ab-6d63-4118-bd7a-10b2e36551de-tmp\") pod \"route-controller-manager-7bf74b8778-rdgnq\" (UID: \"2273f8ab-6d63-4118-bd7a-10b2e36551de\") " pod="openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.652676 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9hkvr\" (UniqueName: \"kubernetes.io/projected/07c76ad9-652a-48d6-ae97-b81575835d05-kube-api-access-9hkvr\") pod \"controller-manager-d59777c6b-65rwb\" (UID: \"07c76ad9-652a-48d6-ae97-b81575835d05\") " pod="openshift-controller-manager/controller-manager-d59777c6b-65rwb"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.652695 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2273f8ab-6d63-4118-bd7a-10b2e36551de-config\") pod \"route-controller-manager-7bf74b8778-rdgnq\" (UID: \"2273f8ab-6d63-4118-bd7a-10b2e36551de\") " pod="openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.652713 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2273f8ab-6d63-4118-bd7a-10b2e36551de-serving-cert\") pod \"route-controller-manager-7bf74b8778-rdgnq\" (UID: \"2273f8ab-6d63-4118-bd7a-10b2e36551de\") " pod="openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.652729 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07c76ad9-652a-48d6-ae97-b81575835d05-client-ca\") pod \"controller-manager-d59777c6b-65rwb\" (UID: \"07c76ad9-652a-48d6-ae97-b81575835d05\") " pod="openshift-controller-manager/controller-manager-d59777c6b-65rwb"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.652751 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07c76ad9-652a-48d6-ae97-b81575835d05-serving-cert\") pod \"controller-manager-d59777c6b-65rwb\" (UID: \"07c76ad9-652a-48d6-ae97-b81575835d05\") " pod="openshift-controller-manager/controller-manager-d59777c6b-65rwb"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.652799 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/07c76ad9-652a-48d6-ae97-b81575835d05-proxy-ca-bundles\") pod \"controller-manager-d59777c6b-65rwb\" (UID: \"07c76ad9-652a-48d6-ae97-b81575835d05\") " pod="openshift-controller-manager/controller-manager-d59777c6b-65rwb"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.652820 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/07c76ad9-652a-48d6-ae97-b81575835d05-tmp\") pod \"controller-manager-d59777c6b-65rwb\" (UID: \"07c76ad9-652a-48d6-ae97-b81575835d05\") " pod="openshift-controller-manager/controller-manager-d59777c6b-65rwb"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.652877 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90663dc8-366f-45cb-8db2-8360cdc28f74-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.652887 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nnmmk\" (UniqueName: \"kubernetes.io/projected/90663dc8-366f-45cb-8db2-8360cdc28f74-kube-api-access-nnmmk\") on node \"crc\" DevicePath \"\""
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.652896 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90663dc8-366f-45cb-8db2-8360cdc28f74-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.652904 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/90663dc8-366f-45cb-8db2-8360cdc28f74-tmp\") on node \"crc\" DevicePath \"\""
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.652912 5120 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/90663dc8-366f-45cb-8db2-8360cdc28f74-client-ca\") on node \"crc\" DevicePath \"\""
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.653801 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/07c76ad9-652a-48d6-ae97-b81575835d05-tmp\") pod \"controller-manager-d59777c6b-65rwb\" (UID: \"07c76ad9-652a-48d6-ae97-b81575835d05\") " pod="openshift-controller-manager/controller-manager-d59777c6b-65rwb"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.654078 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07c76ad9-652a-48d6-ae97-b81575835d05-client-ca\") pod \"controller-manager-d59777c6b-65rwb\" (UID: \"07c76ad9-652a-48d6-ae97-b81575835d05\") " pod="openshift-controller-manager/controller-manager-d59777c6b-65rwb"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.654556 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07c76ad9-652a-48d6-ae97-b81575835d05-config\") pod \"controller-manager-d59777c6b-65rwb\" (UID: \"07c76ad9-652a-48d6-ae97-b81575835d05\") " pod="openshift-controller-manager/controller-manager-d59777c6b-65rwb"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.654690 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/07c76ad9-652a-48d6-ae97-b81575835d05-proxy-ca-bundles\") pod \"controller-manager-d59777c6b-65rwb\" (UID: \"07c76ad9-652a-48d6-ae97-b81575835d05\") " pod="openshift-controller-manager/controller-manager-d59777c6b-65rwb"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.657066 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07c76ad9-652a-48d6-ae97-b81575835d05-serving-cert\") pod \"controller-manager-d59777c6b-65rwb\" (UID: \"07c76ad9-652a-48d6-ae97-b81575835d05\") " pod="openshift-controller-manager/controller-manager-d59777c6b-65rwb"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.671167 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hkvr\" (UniqueName: \"kubernetes.io/projected/07c76ad9-652a-48d6-ae97-b81575835d05-kube-api-access-9hkvr\") pod \"controller-manager-d59777c6b-65rwb\" (UID: \"07c76ad9-652a-48d6-ae97-b81575835d05\") " pod="openshift-controller-manager/controller-manager-d59777c6b-65rwb"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.755122 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-smw46\" (UniqueName: \"kubernetes.io/projected/2273f8ab-6d63-4118-bd7a-10b2e36551de-kube-api-access-smw46\") pod \"route-controller-manager-7bf74b8778-rdgnq\" (UID: \"2273f8ab-6d63-4118-bd7a-10b2e36551de\") " pod="openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.755487 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2273f8ab-6d63-4118-bd7a-10b2e36551de-client-ca\") pod \"route-controller-manager-7bf74b8778-rdgnq\" (UID: \"2273f8ab-6d63-4118-bd7a-10b2e36551de\") " pod="openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.755516 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2273f8ab-6d63-4118-bd7a-10b2e36551de-tmp\") pod \"route-controller-manager-7bf74b8778-rdgnq\" (UID: \"2273f8ab-6d63-4118-bd7a-10b2e36551de\") " pod="openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.755540 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2273f8ab-6d63-4118-bd7a-10b2e36551de-config\") pod \"route-controller-manager-7bf74b8778-rdgnq\" (UID: \"2273f8ab-6d63-4118-bd7a-10b2e36551de\") " pod="openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.755558 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2273f8ab-6d63-4118-bd7a-10b2e36551de-serving-cert\") pod \"route-controller-manager-7bf74b8778-rdgnq\" (UID: \"2273f8ab-6d63-4118-bd7a-10b2e36551de\") " pod="openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.756354 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2273f8ab-6d63-4118-bd7a-10b2e36551de-tmp\") pod \"route-controller-manager-7bf74b8778-rdgnq\" (UID: \"2273f8ab-6d63-4118-bd7a-10b2e36551de\") " pod="openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.757265 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2273f8ab-6d63-4118-bd7a-10b2e36551de-client-ca\") pod \"route-controller-manager-7bf74b8778-rdgnq\" (UID: \"2273f8ab-6d63-4118-bd7a-10b2e36551de\") " pod="openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.757423 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2273f8ab-6d63-4118-bd7a-10b2e36551de-config\") pod \"route-controller-manager-7bf74b8778-rdgnq\" (UID: \"2273f8ab-6d63-4118-bd7a-10b2e36551de\") " pod="openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.761814 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2273f8ab-6d63-4118-bd7a-10b2e36551de-serving-cert\") pod \"route-controller-manager-7bf74b8778-rdgnq\" (UID: \"2273f8ab-6d63-4118-bd7a-10b2e36551de\") " pod="openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.779683 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-smw46\" (UniqueName: \"kubernetes.io/projected/2273f8ab-6d63-4118-bd7a-10b2e36551de-kube-api-access-smw46\") pod \"route-controller-manager-7bf74b8778-rdgnq\" (UID: \"2273f8ab-6d63-4118-bd7a-10b2e36551de\") " pod="openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.814631 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d59777c6b-65rwb"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.857337 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96ltm" event={"ID":"10b5afee-6cb5-425d-af1b-b44a204542f3","Type":"ContainerStarted","Data":"11aa75460eba12313c0be6a9e184e91c5784721826fd86c54122e63735859b70"}
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.859849 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c2744" event={"ID":"4b9fbe5e-2046-431a-af21-9bfbbbecf32b","Type":"ContainerStarted","Data":"ebc91c938ebbb4b6b502fab428b4f6e305258a941c2cbe2811f5ddb77751fd55"}
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.864614 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mpccf" event={"ID":"1bc42a63-9035-4803-98fb-ce63eef24511","Type":"ContainerStarted","Data":"e0ac1793b451a6b21df2e06c217feedf58722c7e342c653d85e8999c8f88e558"}
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.866866 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"86ed13b9ec922a269a9b52235161f2f5a7e8561d3b40de61d824ec4b9bf6a1f5"}
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.866893 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"c8613d8b7a3dc9d70e53024900a38ae496a742fedd70674bd5f87d795e1bd2be"}
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.870637 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.870876 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2" event={"ID":"90663dc8-366f-45cb-8db2-8360cdc28f74","Type":"ContainerDied","Data":"36188ffbc21defcd55e88ffbfb579f2a20a9fc83189088ba1a31aeb7efb12f53"}
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.870952 5120 scope.go:117] "RemoveContainer" containerID="67f2451e868c6e43ffdff925eb1f02e405c2012369989db507653ffea6a60d53"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.885070 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t" event={"ID":"5ca3a5e5-2aab-4e5b-8756-2a725e8b3346","Type":"ContainerDied","Data":"8b9b22cd7d8a2bd677b00e24419e8e044902d9713c70902755632bf073ba3ef3"}
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.885214 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.891286 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58qrd" event={"ID":"1d89b60e-d6e7-47df-898a-199387c5b767","Type":"ContainerStarted","Data":"fec841aa96421753527f98ca620853400bbfb304425ea5a16b4991dab1374357"}
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.895162 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r58jk" event={"ID":"d4f9c834-beb5-42f0-895d-eca73b7897e0","Type":"ContainerStarted","Data":"ab8d504a39af181051af63cf896866b0fed7f080d71851c15a0b807218b69e71"}
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.903442 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l2fzs" event={"ID":"ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02","Type":"ContainerStarted","Data":"64aaeb12dacea3b86d543e74e13783b48421e006c2ce8b2031d78e20c350843b"}
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.911113 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k644q" event={"ID":"24c0e236-bb3f-4b08-ba51-b0881c127d94","Type":"ContainerStarted","Data":"f0b354ac31941a7adb6e28bc78512d0621a40b8ddc9a5c8178f93c115b7d8213"}
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.928348 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj8n4" event={"ID":"6ca05d96-1ede-4860-abf0-dda71706ae45","Type":"ContainerStarted","Data":"cbb4839380cf310d249f475596ac3d4ed1730b7ed49170badb43b3799372b60d"}
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.940635 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq"
Dec 11 16:03:20 crc kubenswrapper[5120]: I1211 16:03:20.965133 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-ccl9q"]
Dec 11 16:03:21 crc kubenswrapper[5120]: W1211 16:03:21.086449 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf863fff9_286a_45fa_b8f0_8a86994b8440.slice/crio-fb283053572371806d1a7972f4380588cd26d29016ee2117ae16afddac8caab8 WatchSource:0}: Error finding container fb283053572371806d1a7972f4380588cd26d29016ee2117ae16afddac8caab8: Status 404 returned error can't find the container with id fb283053572371806d1a7972f4380588cd26d29016ee2117ae16afddac8caab8
Dec 11 16:03:21 crc kubenswrapper[5120]: I1211 16:03:21.086820 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d59777c6b-65rwb"]
Dec 11 16:03:21 crc kubenswrapper[5120]: I1211 16:03:21.167448 5120 scope.go:117] "RemoveContainer" containerID="cb981134b091a0de58f54d5143a7c44ab8701e3219c07959b9ef3f927714ff7f"
Dec 11 16:03:21 crc kubenswrapper[5120]: I1211 16:03:21.287109 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2"]
Dec 11 16:03:21 crc kubenswrapper[5120]: I1211 16:03:21.291776 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-9b4f2"]
Dec 11 16:03:21 crc kubenswrapper[5120]: E1211 16:03:21.454909 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b9694e04edc1482d846aea56f4a8549e6e163687fb078d54c939afbe48100e19" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 11 16:03:21 crc kubenswrapper[5120]: E1211 16:03:21.457163 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b9694e04edc1482d846aea56f4a8549e6e163687fb078d54c939afbe48100e19" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 11 16:03:21 crc kubenswrapper[5120]: E1211 16:03:21.458211 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b9694e04edc1482d846aea56f4a8549e6e163687fb078d54c939afbe48100e19" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 11 16:03:21 crc kubenswrapper[5120]: E1211 16:03:21.458282 5120 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-sgddk" podUID="ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Dec 11 16:03:21 crc kubenswrapper[5120]: I1211 16:03:21.564639 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq"]
Dec 11 16:03:21 crc kubenswrapper[5120]: W1211 16:03:21.572888 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2273f8ab_6d63_4118_bd7a_10b2e36551de.slice/crio-ba4df1447ef40d62158285ea108ae70bd60698ec591d1c0e1f38e984d84680fd WatchSource:0}: Error finding container ba4df1447ef40d62158285ea108ae70bd60698ec591d1c0e1f38e984d84680fd: Status 404 returned error can't find the container with id ba4df1447ef40d62158285ea108ae70bd60698ec591d1c0e1f38e984d84680fd
Dec 11 16:03:21 crc kubenswrapper[5120]: I1211 16:03:21.948497 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ccl9q" event={"ID":"f1d42362-2047-47d8-b096-bd9f85606eeb","Type":"ContainerStarted","Data":"e42cedb3dab2a252ff8965ae6b0ab4e833740b40d7fb812054f5e1adabcf286c"}
Dec 11 16:03:21 crc kubenswrapper[5120]: I1211 16:03:21.948731 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ccl9q" event={"ID":"f1d42362-2047-47d8-b096-bd9f85606eeb","Type":"ContainerStarted","Data":"ecb8085dba4b433f295fcec6d7132ec077a0ea70b7cb9cbdab0a98882a0f9e30"}
Dec 11 16:03:21 crc kubenswrapper[5120]: I1211 16:03:21.953788 5120 generic.go:358] "Generic (PLEG): container finished" podID="6ca05d96-1ede-4860-abf0-dda71706ae45" containerID="cbb4839380cf310d249f475596ac3d4ed1730b7ed49170badb43b3799372b60d" exitCode=0
Dec 11 16:03:21 crc kubenswrapper[5120]: I1211 16:03:21.953895 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj8n4" event={"ID":"6ca05d96-1ede-4860-abf0-dda71706ae45","Type":"ContainerDied","Data":"cbb4839380cf310d249f475596ac3d4ed1730b7ed49170badb43b3799372b60d"}
Dec 11 16:03:21 crc kubenswrapper[5120]: I1211 16:03:21.958040 5120 generic.go:358] "Generic (PLEG): container finished" podID="10b5afee-6cb5-425d-af1b-b44a204542f3" containerID="11aa75460eba12313c0be6a9e184e91c5784721826fd86c54122e63735859b70" exitCode=0
Dec 11 16:03:21 crc kubenswrapper[5120]: I1211 16:03:21.958122 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96ltm" event={"ID":"10b5afee-6cb5-425d-af1b-b44a204542f3","Type":"ContainerDied","Data":"11aa75460eba12313c0be6a9e184e91c5784721826fd86c54122e63735859b70"}
Dec 11 16:03:21 crc kubenswrapper[5120]: I1211 16:03:21.960313 5120 generic.go:358] "Generic (PLEG): container finished" podID="4b9fbe5e-2046-431a-af21-9bfbbbecf32b" containerID="ebc91c938ebbb4b6b502fab428b4f6e305258a941c2cbe2811f5ddb77751fd55" exitCode=0
Dec 11 16:03:21 crc kubenswrapper[5120]: I1211 16:03:21.960361 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c2744" event={"ID":"4b9fbe5e-2046-431a-af21-9bfbbbecf32b","Type":"ContainerDied","Data":"ebc91c938ebbb4b6b502fab428b4f6e305258a941c2cbe2811f5ddb77751fd55"}
Dec 11 16:03:21 crc kubenswrapper[5120]: I1211 16:03:21.964907 5120 generic.go:358] "Generic (PLEG): container finished" podID="1bc42a63-9035-4803-98fb-ce63eef24511" containerID="e0ac1793b451a6b21df2e06c217feedf58722c7e342c653d85e8999c8f88e558" exitCode=0
Dec 11 16:03:21 crc kubenswrapper[5120]: I1211 16:03:21.964937 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mpccf" event={"ID":"1bc42a63-9035-4803-98fb-ce63eef24511","Type":"ContainerDied","Data":"e0ac1793b451a6b21df2e06c217feedf58722c7e342c653d85e8999c8f88e558"}
Dec 11 16:03:21 crc kubenswrapper[5120]: I1211 16:03:21.981641 5120 generic.go:358] "Generic (PLEG): container finished" podID="1d89b60e-d6e7-47df-898a-199387c5b767" containerID="fec841aa96421753527f98ca620853400bbfb304425ea5a16b4991dab1374357" exitCode=0
Dec 11 16:03:21 crc kubenswrapper[5120]: I1211 16:03:21.981711 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58qrd" event={"ID":"1d89b60e-d6e7-47df-898a-199387c5b767","Type":"ContainerDied","Data":"fec841aa96421753527f98ca620853400bbfb304425ea5a16b4991dab1374357"}
Dec 11 16:03:21 crc kubenswrapper[5120]: I1211 16:03:21.987055 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"cff40aa7b57781c5797cad0e0c32efc58d051b5f9b1d7cf7c2b01e7433157855"}
Dec 11 16:03:21 crc kubenswrapper[5120]: I1211 16:03:21.987106 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"9f46dbd398abc08a3687431d8ca0ed49e308de99537c38b5b434b4838d814623"}
Dec 11 16:03:21 crc kubenswrapper[5120]: I1211 16:03:21.987851 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 11 16:03:21 crc kubenswrapper[5120]: I1211 16:03:21.991956 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"6df6a9d21cfb992c5a9db03c4efe79fc7812645a2f6325a2429c168abdba3683"}
Dec 11 16:03:21 crc kubenswrapper[5120]: I1211 16:03:21.991994 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"fb283053572371806d1a7972f4380588cd26d29016ee2117ae16afddac8caab8"}
Dec 11 16:03:21 crc kubenswrapper[5120]: I1211 16:03:21.993205 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq" event={"ID":"2273f8ab-6d63-4118-bd7a-10b2e36551de","Type":"ContainerStarted","Data":"0ebd4231b088e4b1eccc0f79284d456703ec244c3821d1a53a8a02246f68c9c3"}
Dec 11 16:03:21 crc kubenswrapper[5120]: I1211 16:03:21.993238 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq" event={"ID":"2273f8ab-6d63-4118-bd7a-10b2e36551de","Type":"ContainerStarted","Data":"ba4df1447ef40d62158285ea108ae70bd60698ec591d1c0e1f38e984d84680fd"}
Dec 11 16:03:21 crc kubenswrapper[5120]: I1211 16:03:21.993957 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq"
Dec 11 16:03:22 crc kubenswrapper[5120]: I1211 16:03:22.000169 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d59777c6b-65rwb" event={"ID":"07c76ad9-652a-48d6-ae97-b81575835d05","Type":"ContainerStarted","Data":"6186720005d8cd386d46e72735eaa3ea85daf24da5c1dbe3cb42b58bb24415b6"}
Dec 11 16:03:22 crc kubenswrapper[5120]: I1211 16:03:22.000207 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d59777c6b-65rwb" event={"ID":"07c76ad9-652a-48d6-ae97-b81575835d05","Type":"ContainerStarted","Data":"dbf0c75c897642d8e6b16e4c774c69aba41d59fad812b30bef2ce28034409739"}
Dec 11 16:03:22 crc kubenswrapper[5120]: I1211 16:03:22.000866 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-d59777c6b-65rwb"
Dec 11 16:03:22 crc kubenswrapper[5120]: I1211 16:03:22.005531 5120 generic.go:358] "Generic (PLEG): container finished" podID="d4f9c834-beb5-42f0-895d-eca73b7897e0" containerID="ab8d504a39af181051af63cf896866b0fed7f080d71851c15a0b807218b69e71" exitCode=0
Dec 11 16:03:22 crc kubenswrapper[5120]: I1211 16:03:22.005625 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r58jk" event={"ID":"d4f9c834-beb5-42f0-895d-eca73b7897e0","Type":"ContainerDied","Data":"ab8d504a39af181051af63cf896866b0fed7f080d71851c15a0b807218b69e71"}
Dec 11 16:03:22 crc kubenswrapper[5120]: I1211 16:03:22.113015 5120 generic.go:358] "Generic (PLEG): container finished" podID="ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02" containerID="64aaeb12dacea3b86d543e74e13783b48421e006c2ce8b2031d78e20c350843b" exitCode=0
Dec 11 16:03:22 crc kubenswrapper[5120]: I1211 16:03:22.113431 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l2fzs" event={"ID":"ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02","Type":"ContainerDied","Data":"64aaeb12dacea3b86d543e74e13783b48421e006c2ce8b2031d78e20c350843b"}
Dec 11 16:03:22 crc kubenswrapper[5120]: I1211 16:03:22.113461 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l2fzs" event={"ID":"ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02","Type":"ContainerStarted","Data":"7359f29e456d4f6dae4bc8a1c5b8a8372face338eb63dfcc88d11f89d667f035"}
Dec 11 16:03:22 crc kubenswrapper[5120]: I1211 16:03:22.130674 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq" podStartSLOduration=14.130655824 podStartE2EDuration="14.130655824s" podCreationTimestamp="2025-12-11 16:03:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:03:22.128435241 +0000 UTC m=+151.382738582" watchObservedRunningTime="2025-12-11 16:03:22.130655824 +0000 UTC m=+151.384959155"
Dec 11 16:03:22 crc kubenswrapper[5120]: I1211 16:03:22.143687 5120 generic.go:358] "Generic (PLEG): container finished" podID="24c0e236-bb3f-4b08-ba51-b0881c127d94" containerID="f0b354ac31941a7adb6e28bc78512d0621a40b8ddc9a5c8178f93c115b7d8213" exitCode=0
Dec 11 16:03:22 crc kubenswrapper[5120]: I1211 16:03:22.143738 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k644q" event={"ID":"24c0e236-bb3f-4b08-ba51-b0881c127d94","Type":"ContainerDied","Data":"f0b354ac31941a7adb6e28bc78512d0621a40b8ddc9a5c8178f93c115b7d8213"}
Dec 11 16:03:22 crc kubenswrapper[5120]: I1211 16:03:22.143764 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k644q" event={"ID":"24c0e236-bb3f-4b08-ba51-b0881c127d94","Type":"ContainerStarted","Data":"7699fa3e82c9493c9effbc8760e893407f1116b817b6bc2f8617b50c475935df"}
Dec 11 16:03:22 crc kubenswrapper[5120]: I1211 16:03:22.159128 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq"
Dec 11 16:03:22 crc kubenswrapper[5120]: I1211 16:03:22.218989 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d59777c6b-65rwb" podStartSLOduration=14.218971979 podStartE2EDuration="14.218971979s" podCreationTimestamp="2025-12-11 16:03:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:03:22.217743474 +0000 UTC m=+151.472046815" watchObservedRunningTime="2025-12-11 16:03:22.218971979 +0000 UTC m=+151.473275310"
Dec 11 16:03:22 crc kubenswrapper[5120]: I1211 16:03:22.244749 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d59777c6b-65rwb"
Dec 11 16:03:22 crc kubenswrapper[5120]: I1211 16:03:22.246107 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k644q" podStartSLOduration=4.492505244 podStartE2EDuration="29.246092375s" podCreationTimestamp="2025-12-11 16:02:53 +0000 UTC" firstStartedPulling="2025-12-11 16:02:55.596992996 +0000 UTC m=+124.851296327" lastFinishedPulling="2025-12-11 16:03:20.350580127 +0000 UTC m=+149.604883458" observedRunningTime="2025-12-11 16:03:22.243627655 +0000 UTC m=+151.497930986" watchObservedRunningTime="2025-12-11 16:03:22.246092375 +0000 UTC m=+151.500395706"
Dec 11 16:03:22 crc kubenswrapper[5120]: I1211 16:03:22.262042 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l2fzs" podStartSLOduration=4.620951464 podStartE2EDuration="28.262026485s" podCreationTimestamp="2025-12-11 16:02:54 +0000 UTC" firstStartedPulling="2025-12-11 16:02:56.684685375 +0000 UTC m=+125.938988706" lastFinishedPulling="2025-12-11 16:03:20.325760396 +0000 UTC m=+149.580063727" observedRunningTime="2025-12-11 16:03:22.261811619 +0000 UTC m=+151.516114950" watchObservedRunningTime="2025-12-11 16:03:22.262026485 +0000 UTC m=+151.516329816"
Dec 11 16:03:23 crc kubenswrapper[5120]: I1211 16:03:23.029440 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90663dc8-366f-45cb-8db2-8360cdc28f74" path="/var/lib/kubelet/pods/90663dc8-366f-45cb-8db2-8360cdc28f74/volumes"
Dec 11 16:03:23 crc kubenswrapper[5120]: I1211 16:03:23.149998 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ccl9q" event={"ID":"f1d42362-2047-47d8-b096-bd9f85606eeb","Type":"ContainerStarted","Data":"fc848fd8092960da4e4dd2b5ad493db32ed9ff0075bad81fd35dc599f8b3edb5"}
Dec 11 16:03:23 crc kubenswrapper[5120]: I1211 16:03:23.152278 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj8n4" event={"ID":"6ca05d96-1ede-4860-abf0-dda71706ae45","Type":"ContainerStarted","Data":"811c6c11294d25f9f12404e73ed117bbc3cea27e577cc4231412ac5cac3cb4e9"}
Dec 11 16:03:23 crc kubenswrapper[5120]: I1211 16:03:23.154331 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96ltm" event={"ID":"10b5afee-6cb5-425d-af1b-b44a204542f3","Type":"ContainerStarted","Data":"c34cc1408a88662250890eadd2df396c9641cf0e24c4683e6cbf73ad520116ce"}
Dec 11 16:03:23 crc kubenswrapper[5120]: I1211 16:03:23.156125 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c2744" event={"ID":"4b9fbe5e-2046-431a-af21-9bfbbbecf32b","Type":"ContainerStarted","Data":"46f0ff1ecb1b24ef93c6f8733a8df6816929703c575f63c17636985b942ce10f"}
Dec 11 16:03:23 crc kubenswrapper[5120]: I1211 16:03:23.162576 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mpccf" event={"ID":"1bc42a63-9035-4803-98fb-ce63eef24511","Type":"ContainerStarted","Data":"f2e05050d958b9c797a26d2bf9f6a3ad48f736f12b394cb5a291ea7a4c2911e5"}
Dec 11 16:03:23 crc kubenswrapper[5120]: I1211 16:03:23.164865 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58qrd" event={"ID":"1d89b60e-d6e7-47df-898a-199387c5b767","Type":"ContainerStarted","Data":"fc42eadd4bcefe72543e0015553167e6ad14c08692c134e9b0224e7f01036aea"}
Dec 11 16:03:23 crc kubenswrapper[5120]: I1211 16:03:23.169048 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-ccl9q" podStartSLOduration=134.169036102 podStartE2EDuration="2m14.169036102s" podCreationTimestamp="2025-12-11 16:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:03:23.165921354 +0000 UTC m=+152.420224685" watchObservedRunningTime="2025-12-11 16:03:23.169036102 +0000 UTC m=+152.423339433"
Dec 11 16:03:23 crc kubenswrapper[5120]: I1211 16:03:23.171005 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r58jk" event={"ID":"d4f9c834-beb5-42f0-895d-eca73b7897e0","Type":"ContainerStarted","Data":"fd2b788572c5a601df73d68e78a10530402dc0bba1f973261492aa3552afbbd0"}
Dec 11 16:03:23 crc kubenswrapper[5120]: I1211 16:03:23.186931 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c2744" podStartSLOduration=6.365023026 podStartE2EDuration="31.186909186s" podCreationTimestamp="2025-12-11 16:02:52 +0000 UTC" firstStartedPulling="2025-12-11 16:02:55.583856175 +0000 UTC m=+124.838159506" lastFinishedPulling="2025-12-11 16:03:20.405742345 +0000 UTC m=+149.660045666" observedRunningTime="2025-12-11 16:03:23.182847592 +0000 UTC m=+152.437150943" watchObservedRunningTime="2025-12-11 16:03:23.186909186 +0000 UTC m=+152.441212517"
Dec 11 16:03:23 crc kubenswrapper[5120]: I1211 16:03:23.211632 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-96ltm" podStartSLOduration=5.402198692 podStartE2EDuration="27.211613004s" podCreationTimestamp="2025-12-11 16:02:56 +0000 UTC" firstStartedPulling="2025-12-11 16:02:58.717470084 +0000 UTC m=+127.971773415" lastFinishedPulling="2025-12-11 16:03:20.526884396 +0000 UTC m=+149.781187727" observedRunningTime="2025-12-11 16:03:23.207731375 +0000 UTC m=+152.462034716" watchObservedRunningTime="2025-12-11 16:03:23.211613004 +0000 UTC m=+152.465916335"
Dec 11 16:03:23 crc kubenswrapper[5120]: I1211 16:03:23.233065 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rj8n4" podStartSLOduration=5.45880691 podStartE2EDuration="30.23305178s" podCreationTimestamp="2025-12-11 16:02:53 +0000 UTC" firstStartedPulling="2025-12-11 16:02:55.635506028 +0000 UTC m=+124.889809359" lastFinishedPulling="2025-12-11 16:03:20.409750898 +0000 UTC m=+149.664054229" observedRunningTime="2025-12-11 16:03:23.229255392 +0000 UTC m=+152.483558723" watchObservedRunningTime="2025-12-11 16:03:23.23305178 +0000 UTC m=+152.487355111"
Dec 11 16:03:23 crc kubenswrapper[5120]: I1211 16:03:23.248825 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mpccf" podStartSLOduration=4.506518649 podStartE2EDuration="28.248812315s" podCreationTimestamp="2025-12-11 16:02:55 +0000 UTC" firstStartedPulling="2025-12-11 16:02:56.66703902 +0000 UTC m=+125.921342351" lastFinishedPulling="2025-12-11 16:03:20.409332696 +0000 UTC m=+149.663636017" observedRunningTime="2025-12-11 16:03:23.243440113 +0000 UTC m=+152.497743454" watchObservedRunningTime="2025-12-11 16:03:23.248812315 +0000 UTC m=+152.503115646"
Dec 11 16:03:23 crc kubenswrapper[5120]: I1211 16:03:23.260137 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-58qrd" podStartSLOduration=4.555949406 podStartE2EDuration="27.260120494s" podCreationTimestamp="2025-12-11 16:02:56 +0000 UTC" firstStartedPulling="2025-12-11 16:02:57.704680355 +0000 UTC m=+126.958983676" lastFinishedPulling="2025-12-11 16:03:20.408851433 +0000 UTC m=+149.663154764" observedRunningTime="2025-12-11 16:03:23.259483026 +0000 UTC m=+152.513786357" watchObservedRunningTime="2025-12-11 16:03:23.260120494 +0000 UTC m=+152.514423825"
Dec 11 16:03:23 crc kubenswrapper[5120]: I1211 16:03:23.283441 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r58jk" podStartSLOduration=5.545330531 podStartE2EDuration="30.283422762s" podCreationTimestamp="2025-12-11 16:02:53 +0000 UTC" firstStartedPulling="2025-12-11 16:02:55.589673471 +0000 UTC m=+124.843976802" lastFinishedPulling="2025-12-11 16:03:20.327765702 +0000 UTC m=+149.582069033" observedRunningTime="2025-12-11 16:03:23.281883029 +0000 UTC m=+152.536186360" watchObservedRunningTime="2025-12-11 16:03:23.283422762 +0000 UTC m=+152.537726093"
Dec 11 16:03:23 crc kubenswrapper[5120]: I1211 16:03:23.330224 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-c2744"
Dec 11 16:03:23 crc kubenswrapper[5120]: I1211 16:03:23.330277 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c2744"
Dec 11 16:03:23 crc kubenswrapper[5120]: I1211 16:03:23.490705 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rj8n4"
Dec 11 16:03:23 crc kubenswrapper[5120]: I1211 16:03:23.491064 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-rj8n4"
Dec 11 16:03:23 crc
kubenswrapper[5120]: I1211 16:03:23.771859 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k644q" Dec 11 16:03:23 crc kubenswrapper[5120]: I1211 16:03:23.771923 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-k644q" Dec 11 16:03:23 crc kubenswrapper[5120]: I1211 16:03:23.814239 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k644q" Dec 11 16:03:24 crc kubenswrapper[5120]: I1211 16:03:24.042797 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r58jk" Dec 11 16:03:24 crc kubenswrapper[5120]: I1211 16:03:24.043693 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-r58jk" Dec 11 16:03:24 crc kubenswrapper[5120]: I1211 16:03:24.495295 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-c2744" podUID="4b9fbe5e-2046-431a-af21-9bfbbbecf32b" containerName="registry-server" probeResult="failure" output=< Dec 11 16:03:24 crc kubenswrapper[5120]: timeout: failed to connect service ":50051" within 1s Dec 11 16:03:24 crc kubenswrapper[5120]: > Dec 11 16:03:24 crc kubenswrapper[5120]: I1211 16:03:24.526360 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-rj8n4" podUID="6ca05d96-1ede-4860-abf0-dda71706ae45" containerName="registry-server" probeResult="failure" output=< Dec 11 16:03:24 crc kubenswrapper[5120]: timeout: failed to connect service ":50051" within 1s Dec 11 16:03:24 crc kubenswrapper[5120]: > Dec 11 16:03:24 crc kubenswrapper[5120]: I1211 16:03:24.894873 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-8mr9f" Dec 11 16:03:25 crc kubenswrapper[5120]: I1211 16:03:25.076656 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-r58jk" podUID="d4f9c834-beb5-42f0-895d-eca73b7897e0" containerName="registry-server" probeResult="failure" output=< Dec 11 16:03:25 crc kubenswrapper[5120]: timeout: failed to connect service ":50051" within 1s Dec 11 16:03:25 crc kubenswrapper[5120]: > Dec 11 16:03:25 crc kubenswrapper[5120]: I1211 16:03:25.182848 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-sgddk_ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c/kube-multus-additional-cni-plugins/0.log" Dec 11 16:03:25 crc kubenswrapper[5120]: I1211 16:03:25.183118 5120 generic.go:358] "Generic (PLEG): container finished" podID="ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c" containerID="b9694e04edc1482d846aea56f4a8549e6e163687fb078d54c939afbe48100e19" exitCode=137 Dec 11 16:03:25 crc kubenswrapper[5120]: I1211 16:03:25.183916 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-sgddk" event={"ID":"ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c","Type":"ContainerDied","Data":"b9694e04edc1482d846aea56f4a8549e6e163687fb078d54c939afbe48100e19"} Dec 11 16:03:25 crc kubenswrapper[5120]: I1211 16:03:25.347864 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l2fzs" Dec 11 16:03:25 crc kubenswrapper[5120]: I1211 16:03:25.347933 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-l2fzs" Dec 11 16:03:25 crc kubenswrapper[5120]: I1211 16:03:25.397644 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l2fzs" Dec 11 16:03:25 crc kubenswrapper[5120]: I1211 16:03:25.545610 5120 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-sgddk_ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c/kube-multus-additional-cni-plugins/0.log" Dec 11 16:03:25 crc kubenswrapper[5120]: I1211 16:03:25.545679 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-sgddk" Dec 11 16:03:25 crc kubenswrapper[5120]: I1211 16:03:25.630239 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsqws\" (UniqueName: \"kubernetes.io/projected/ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c-kube-api-access-rsqws\") pod \"ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c\" (UID: \"ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c\") " Dec 11 16:03:25 crc kubenswrapper[5120]: I1211 16:03:25.630336 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c-cni-sysctl-allowlist\") pod \"ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c\" (UID: \"ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c\") " Dec 11 16:03:25 crc kubenswrapper[5120]: I1211 16:03:25.630391 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c-ready\") pod \"ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c\" (UID: \"ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c\") " Dec 11 16:03:25 crc kubenswrapper[5120]: I1211 16:03:25.630422 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c-tuning-conf-dir\") pod \"ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c\" (UID: \"ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c\") " Dec 11 16:03:25 crc kubenswrapper[5120]: I1211 16:03:25.630627 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c" (UID: "ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:03:25 crc kubenswrapper[5120]: I1211 16:03:25.630958 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c-ready" (OuterVolumeSpecName: "ready") pod "ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c" (UID: "ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:03:25 crc kubenswrapper[5120]: I1211 16:03:25.631175 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c" (UID: "ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:03:25 crc kubenswrapper[5120]: I1211 16:03:25.639321 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c-kube-api-access-rsqws" (OuterVolumeSpecName: "kube-api-access-rsqws") pod "ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c" (UID: "ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c"). InnerVolumeSpecName "kube-api-access-rsqws". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:03:25 crc kubenswrapper[5120]: I1211 16:03:25.730191 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mpccf" Dec 11 16:03:25 crc kubenswrapper[5120]: I1211 16:03:25.730262 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-mpccf" Dec 11 16:03:25 crc kubenswrapper[5120]: I1211 16:03:25.731892 5120 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 11 16:03:25 crc kubenswrapper[5120]: I1211 16:03:25.731935 5120 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c-ready\") on node \"crc\" DevicePath \"\"" Dec 11 16:03:25 crc kubenswrapper[5120]: I1211 16:03:25.731948 5120 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Dec 11 16:03:25 crc kubenswrapper[5120]: I1211 16:03:25.731959 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rsqws\" (UniqueName: \"kubernetes.io/projected/ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c-kube-api-access-rsqws\") on node \"crc\" DevicePath \"\"" Dec 11 16:03:25 crc kubenswrapper[5120]: I1211 16:03:25.769820 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mpccf" Dec 11 16:03:26 crc kubenswrapper[5120]: I1211 16:03:26.190264 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-sgddk_ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c/kube-multus-additional-cni-plugins/0.log" Dec 11 16:03:26 crc 
kubenswrapper[5120]: I1211 16:03:26.190379 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-sgddk" event={"ID":"ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c","Type":"ContainerDied","Data":"f5932be806d357f23644514fe9c7aaafef9ef57dafe48a5d6144bf96f0cd370a"} Dec 11 16:03:26 crc kubenswrapper[5120]: I1211 16:03:26.190443 5120 scope.go:117] "RemoveContainer" containerID="b9694e04edc1482d846aea56f4a8549e6e163687fb078d54c939afbe48100e19" Dec 11 16:03:26 crc kubenswrapper[5120]: I1211 16:03:26.190479 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-sgddk" Dec 11 16:03:26 crc kubenswrapper[5120]: I1211 16:03:26.219296 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-sgddk"] Dec 11 16:03:26 crc kubenswrapper[5120]: I1211 16:03:26.225424 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l2fzs" Dec 11 16:03:26 crc kubenswrapper[5120]: I1211 16:03:26.228829 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-sgddk"] Dec 11 16:03:27 crc kubenswrapper[5120]: I1211 16:03:27.029850 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c" path="/var/lib/kubelet/pods/ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c/volumes" Dec 11 16:03:27 crc kubenswrapper[5120]: I1211 16:03:27.035076 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-58qrd" Dec 11 16:03:27 crc kubenswrapper[5120]: I1211 16:03:27.035275 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-58qrd" Dec 11 16:03:27 crc kubenswrapper[5120]: I1211 16:03:27.294279 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-marketplace/redhat-operators-96ltm" Dec 11 16:03:27 crc kubenswrapper[5120]: I1211 16:03:27.295868 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-96ltm" Dec 11 16:03:28 crc kubenswrapper[5120]: I1211 16:03:28.075882 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-58qrd" podUID="1d89b60e-d6e7-47df-898a-199387c5b767" containerName="registry-server" probeResult="failure" output=< Dec 11 16:03:28 crc kubenswrapper[5120]: timeout: failed to connect service ":50051" within 1s Dec 11 16:03:28 crc kubenswrapper[5120]: > Dec 11 16:03:28 crc kubenswrapper[5120]: I1211 16:03:28.242138 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d59777c6b-65rwb"] Dec 11 16:03:28 crc kubenswrapper[5120]: I1211 16:03:28.242394 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d59777c6b-65rwb" podUID="07c76ad9-652a-48d6-ae97-b81575835d05" containerName="controller-manager" containerID="cri-o://6186720005d8cd386d46e72735eaa3ea85daf24da5c1dbe3cb42b58bb24415b6" gracePeriod=30 Dec 11 16:03:28 crc kubenswrapper[5120]: I1211 16:03:28.291487 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq"] Dec 11 16:03:28 crc kubenswrapper[5120]: I1211 16:03:28.291787 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq" podUID="2273f8ab-6d63-4118-bd7a-10b2e36551de" containerName="route-controller-manager" containerID="cri-o://0ebd4231b088e4b1eccc0f79284d456703ec244c3821d1a53a8a02246f68c9c3" gracePeriod=30 Dec 11 16:03:28 crc kubenswrapper[5120]: I1211 16:03:28.333629 5120 prober.go:120] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-96ltm" podUID="10b5afee-6cb5-425d-af1b-b44a204542f3" containerName="registry-server" probeResult="failure" output=< Dec 11 16:03:28 crc kubenswrapper[5120]: timeout: failed to connect service ":50051" within 1s Dec 11 16:03:28 crc kubenswrapper[5120]: > Dec 11 16:03:28 crc kubenswrapper[5120]: I1211 16:03:28.564452 5120 ???:1] "http: TLS handshake error from 192.168.126.11:57308: no serving certificate available for the kubelet" Dec 11 16:03:29 crc kubenswrapper[5120]: I1211 16:03:29.208006 5120 generic.go:358] "Generic (PLEG): container finished" podID="2273f8ab-6d63-4118-bd7a-10b2e36551de" containerID="0ebd4231b088e4b1eccc0f79284d456703ec244c3821d1a53a8a02246f68c9c3" exitCode=0 Dec 11 16:03:29 crc kubenswrapper[5120]: I1211 16:03:29.208113 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq" event={"ID":"2273f8ab-6d63-4118-bd7a-10b2e36551de","Type":"ContainerDied","Data":"0ebd4231b088e4b1eccc0f79284d456703ec244c3821d1a53a8a02246f68c9c3"} Dec 11 16:03:29 crc kubenswrapper[5120]: I1211 16:03:29.209677 5120 generic.go:358] "Generic (PLEG): container finished" podID="07c76ad9-652a-48d6-ae97-b81575835d05" containerID="6186720005d8cd386d46e72735eaa3ea85daf24da5c1dbe3cb42b58bb24415b6" exitCode=0 Dec 11 16:03:29 crc kubenswrapper[5120]: I1211 16:03:29.209780 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d59777c6b-65rwb" event={"ID":"07c76ad9-652a-48d6-ae97-b81575835d05","Type":"ContainerDied","Data":"6186720005d8cd386d46e72735eaa3ea85daf24da5c1dbe3cb42b58bb24415b6"} Dec 11 16:03:30 crc kubenswrapper[5120]: I1211 16:03:30.146431 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d59777c6b-65rwb" Dec 11 16:03:30 crc kubenswrapper[5120]: I1211 16:03:30.174110 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r"] Dec 11 16:03:30 crc kubenswrapper[5120]: I1211 16:03:30.174854 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="07c76ad9-652a-48d6-ae97-b81575835d05" containerName="controller-manager" Dec 11 16:03:30 crc kubenswrapper[5120]: I1211 16:03:30.174873 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c76ad9-652a-48d6-ae97-b81575835d05" containerName="controller-manager" Dec 11 16:03:30 crc kubenswrapper[5120]: I1211 16:03:30.174900 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c" containerName="kube-multus-additional-cni-plugins" Dec 11 16:03:30 crc kubenswrapper[5120]: I1211 16:03:30.174908 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c" containerName="kube-multus-additional-cni-plugins" Dec 11 16:03:30 crc kubenswrapper[5120]: I1211 16:03:30.176515 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="ec4c2325-0f17-4ce7-bc4a-3fd17cfb708c" containerName="kube-multus-additional-cni-plugins" Dec 11 16:03:30 crc kubenswrapper[5120]: I1211 16:03:30.176538 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="07c76ad9-652a-48d6-ae97-b81575835d05" containerName="controller-manager" Dec 11 16:03:30 crc kubenswrapper[5120]: I1211 16:03:30.304224 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07c76ad9-652a-48d6-ae97-b81575835d05-client-ca\") pod \"07c76ad9-652a-48d6-ae97-b81575835d05\" (UID: \"07c76ad9-652a-48d6-ae97-b81575835d05\") " Dec 11 16:03:30 crc kubenswrapper[5120]: I1211 16:03:30.304573 5120 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/07c76ad9-652a-48d6-ae97-b81575835d05-tmp\") pod \"07c76ad9-652a-48d6-ae97-b81575835d05\" (UID: \"07c76ad9-652a-48d6-ae97-b81575835d05\") " Dec 11 16:03:30 crc kubenswrapper[5120]: I1211 16:03:30.304652 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07c76ad9-652a-48d6-ae97-b81575835d05-config\") pod \"07c76ad9-652a-48d6-ae97-b81575835d05\" (UID: \"07c76ad9-652a-48d6-ae97-b81575835d05\") " Dec 11 16:03:30 crc kubenswrapper[5120]: I1211 16:03:30.304749 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hkvr\" (UniqueName: \"kubernetes.io/projected/07c76ad9-652a-48d6-ae97-b81575835d05-kube-api-access-9hkvr\") pod \"07c76ad9-652a-48d6-ae97-b81575835d05\" (UID: \"07c76ad9-652a-48d6-ae97-b81575835d05\") " Dec 11 16:03:30 crc kubenswrapper[5120]: I1211 16:03:30.304796 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07c76ad9-652a-48d6-ae97-b81575835d05-serving-cert\") pod \"07c76ad9-652a-48d6-ae97-b81575835d05\" (UID: \"07c76ad9-652a-48d6-ae97-b81575835d05\") " Dec 11 16:03:30 crc kubenswrapper[5120]: I1211 16:03:30.304818 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/07c76ad9-652a-48d6-ae97-b81575835d05-proxy-ca-bundles\") pod \"07c76ad9-652a-48d6-ae97-b81575835d05\" (UID: \"07c76ad9-652a-48d6-ae97-b81575835d05\") " Dec 11 16:03:30 crc kubenswrapper[5120]: I1211 16:03:30.304909 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07c76ad9-652a-48d6-ae97-b81575835d05-tmp" (OuterVolumeSpecName: "tmp") pod "07c76ad9-652a-48d6-ae97-b81575835d05" (UID: 
"07c76ad9-652a-48d6-ae97-b81575835d05"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:03:30 crc kubenswrapper[5120]: I1211 16:03:30.305116 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/07c76ad9-652a-48d6-ae97-b81575835d05-tmp\") on node \"crc\" DevicePath \"\"" Dec 11 16:03:30 crc kubenswrapper[5120]: I1211 16:03:30.305323 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07c76ad9-652a-48d6-ae97-b81575835d05-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "07c76ad9-652a-48d6-ae97-b81575835d05" (UID: "07c76ad9-652a-48d6-ae97-b81575835d05"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:03:30 crc kubenswrapper[5120]: I1211 16:03:30.305361 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07c76ad9-652a-48d6-ae97-b81575835d05-client-ca" (OuterVolumeSpecName: "client-ca") pod "07c76ad9-652a-48d6-ae97-b81575835d05" (UID: "07c76ad9-652a-48d6-ae97-b81575835d05"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:03:30 crc kubenswrapper[5120]: I1211 16:03:30.305411 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07c76ad9-652a-48d6-ae97-b81575835d05-config" (OuterVolumeSpecName: "config") pod "07c76ad9-652a-48d6-ae97-b81575835d05" (UID: "07c76ad9-652a-48d6-ae97-b81575835d05"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:03:30 crc kubenswrapper[5120]: I1211 16:03:30.310355 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07c76ad9-652a-48d6-ae97-b81575835d05-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "07c76ad9-652a-48d6-ae97-b81575835d05" (UID: "07c76ad9-652a-48d6-ae97-b81575835d05"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:03:30 crc kubenswrapper[5120]: I1211 16:03:30.310395 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07c76ad9-652a-48d6-ae97-b81575835d05-kube-api-access-9hkvr" (OuterVolumeSpecName: "kube-api-access-9hkvr") pod "07c76ad9-652a-48d6-ae97-b81575835d05" (UID: "07c76ad9-652a-48d6-ae97-b81575835d05"). InnerVolumeSpecName "kube-api-access-9hkvr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:03:30 crc kubenswrapper[5120]: I1211 16:03:30.406052 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9hkvr\" (UniqueName: \"kubernetes.io/projected/07c76ad9-652a-48d6-ae97-b81575835d05-kube-api-access-9hkvr\") on node \"crc\" DevicePath \"\"" Dec 11 16:03:30 crc kubenswrapper[5120]: I1211 16:03:30.406100 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07c76ad9-652a-48d6-ae97-b81575835d05-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:03:30 crc kubenswrapper[5120]: I1211 16:03:30.406109 5120 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/07c76ad9-652a-48d6-ae97-b81575835d05-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 16:03:30 crc kubenswrapper[5120]: I1211 16:03:30.406120 5120 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07c76ad9-652a-48d6-ae97-b81575835d05-client-ca\") on node 
\"crc\" DevicePath \"\"" Dec 11 16:03:30 crc kubenswrapper[5120]: I1211 16:03:30.406128 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07c76ad9-652a-48d6-ae97-b81575835d05-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.412303 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.450857 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d59777c6b-65rwb" event={"ID":"07c76ad9-652a-48d6-ae97-b81575835d05","Type":"ContainerDied","Data":"dbf0c75c897642d8e6b16e4c774c69aba41d59fad812b30bef2ce28034409739"} Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.450942 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d59777c6b-65rwb" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.451035 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r"] Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.450948 5120 scope.go:117] "RemoveContainer" containerID="6186720005d8cd386d46e72735eaa3ea85daf24da5c1dbe3cb42b58bb24415b6" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.451386 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.455020 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.455337 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.455556 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.456585 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.456752 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.457142 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.464943 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.470183 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz"] Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.470972 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2273f8ab-6d63-4118-bd7a-10b2e36551de" containerName="route-controller-manager" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.470990 5120 
state_mem.go:107] "Deleted CPUSet assignment" podUID="2273f8ab-6d63-4118-bd7a-10b2e36551de" containerName="route-controller-manager" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.471271 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="2273f8ab-6d63-4118-bd7a-10b2e36551de" containerName="route-controller-manager" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.524402 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2273f8ab-6d63-4118-bd7a-10b2e36551de-config\") pod \"2273f8ab-6d63-4118-bd7a-10b2e36551de\" (UID: \"2273f8ab-6d63-4118-bd7a-10b2e36551de\") " Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.524464 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2273f8ab-6d63-4118-bd7a-10b2e36551de-tmp\") pod \"2273f8ab-6d63-4118-bd7a-10b2e36551de\" (UID: \"2273f8ab-6d63-4118-bd7a-10b2e36551de\") " Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.524485 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2273f8ab-6d63-4118-bd7a-10b2e36551de-serving-cert\") pod \"2273f8ab-6d63-4118-bd7a-10b2e36551de\" (UID: \"2273f8ab-6d63-4118-bd7a-10b2e36551de\") " Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.524507 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2273f8ab-6d63-4118-bd7a-10b2e36551de-client-ca\") pod \"2273f8ab-6d63-4118-bd7a-10b2e36551de\" (UID: \"2273f8ab-6d63-4118-bd7a-10b2e36551de\") " Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.524608 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smw46\" (UniqueName: \"kubernetes.io/projected/2273f8ab-6d63-4118-bd7a-10b2e36551de-kube-api-access-smw46\") pod 
\"2273f8ab-6d63-4118-bd7a-10b2e36551de\" (UID: \"2273f8ab-6d63-4118-bd7a-10b2e36551de\") " Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.524771 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22e0db3c-8094-4ed2-b074-3d0d666070fa-config\") pod \"controller-manager-5b8cb6cc5c-q8s5r\" (UID: \"22e0db3c-8094-4ed2-b074-3d0d666070fa\") " pod="openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.524811 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/22e0db3c-8094-4ed2-b074-3d0d666070fa-client-ca\") pod \"controller-manager-5b8cb6cc5c-q8s5r\" (UID: \"22e0db3c-8094-4ed2-b074-3d0d666070fa\") " pod="openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.524858 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22e0db3c-8094-4ed2-b074-3d0d666070fa-serving-cert\") pod \"controller-manager-5b8cb6cc5c-q8s5r\" (UID: \"22e0db3c-8094-4ed2-b074-3d0d666070fa\") " pod="openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.524881 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq4vr\" (UniqueName: \"kubernetes.io/projected/22e0db3c-8094-4ed2-b074-3d0d666070fa-kube-api-access-kq4vr\") pod \"controller-manager-5b8cb6cc5c-q8s5r\" (UID: \"22e0db3c-8094-4ed2-b074-3d0d666070fa\") " pod="openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.524910 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22e0db3c-8094-4ed2-b074-3d0d666070fa-proxy-ca-bundles\") pod \"controller-manager-5b8cb6cc5c-q8s5r\" (UID: \"22e0db3c-8094-4ed2-b074-3d0d666070fa\") " pod="openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.525009 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/22e0db3c-8094-4ed2-b074-3d0d666070fa-tmp\") pod \"controller-manager-5b8cb6cc5c-q8s5r\" (UID: \"22e0db3c-8094-4ed2-b074-3d0d666070fa\") " pod="openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.525420 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2273f8ab-6d63-4118-bd7a-10b2e36551de-tmp" (OuterVolumeSpecName: "tmp") pod "2273f8ab-6d63-4118-bd7a-10b2e36551de" (UID: "2273f8ab-6d63-4118-bd7a-10b2e36551de"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.525752 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2273f8ab-6d63-4118-bd7a-10b2e36551de-config" (OuterVolumeSpecName: "config") pod "2273f8ab-6d63-4118-bd7a-10b2e36551de" (UID: "2273f8ab-6d63-4118-bd7a-10b2e36551de"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.526010 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2273f8ab-6d63-4118-bd7a-10b2e36551de-client-ca" (OuterVolumeSpecName: "client-ca") pod "2273f8ab-6d63-4118-bd7a-10b2e36551de" (UID: "2273f8ab-6d63-4118-bd7a-10b2e36551de"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.529860 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2273f8ab-6d63-4118-bd7a-10b2e36551de-kube-api-access-smw46" (OuterVolumeSpecName: "kube-api-access-smw46") pod "2273f8ab-6d63-4118-bd7a-10b2e36551de" (UID: "2273f8ab-6d63-4118-bd7a-10b2e36551de"). InnerVolumeSpecName "kube-api-access-smw46". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.529993 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2273f8ab-6d63-4118-bd7a-10b2e36551de-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2273f8ab-6d63-4118-bd7a-10b2e36551de" (UID: "2273f8ab-6d63-4118-bd7a-10b2e36551de"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.626312 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22e0db3c-8094-4ed2-b074-3d0d666070fa-serving-cert\") pod \"controller-manager-5b8cb6cc5c-q8s5r\" (UID: \"22e0db3c-8094-4ed2-b074-3d0d666070fa\") " pod="openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.626383 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kq4vr\" (UniqueName: \"kubernetes.io/projected/22e0db3c-8094-4ed2-b074-3d0d666070fa-kube-api-access-kq4vr\") pod \"controller-manager-5b8cb6cc5c-q8s5r\" (UID: \"22e0db3c-8094-4ed2-b074-3d0d666070fa\") " pod="openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.626422 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/22e0db3c-8094-4ed2-b074-3d0d666070fa-proxy-ca-bundles\") pod \"controller-manager-5b8cb6cc5c-q8s5r\" (UID: \"22e0db3c-8094-4ed2-b074-3d0d666070fa\") " pod="openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.626484 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/22e0db3c-8094-4ed2-b074-3d0d666070fa-tmp\") pod \"controller-manager-5b8cb6cc5c-q8s5r\" (UID: \"22e0db3c-8094-4ed2-b074-3d0d666070fa\") " pod="openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.626856 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22e0db3c-8094-4ed2-b074-3d0d666070fa-config\") pod \"controller-manager-5b8cb6cc5c-q8s5r\" (UID: \"22e0db3c-8094-4ed2-b074-3d0d666070fa\") " pod="openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.627300 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/22e0db3c-8094-4ed2-b074-3d0d666070fa-client-ca\") pod \"controller-manager-5b8cb6cc5c-q8s5r\" (UID: \"22e0db3c-8094-4ed2-b074-3d0d666070fa\") " pod="openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.627618 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2273f8ab-6d63-4118-bd7a-10b2e36551de-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.627641 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2273f8ab-6d63-4118-bd7a-10b2e36551de-tmp\") on node \"crc\" DevicePath \"\"" Dec 11 16:03:31 crc 
kubenswrapper[5120]: I1211 16:03:31.627650 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2273f8ab-6d63-4118-bd7a-10b2e36551de-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.627662 5120 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2273f8ab-6d63-4118-bd7a-10b2e36551de-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.627674 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-smw46\" (UniqueName: \"kubernetes.io/projected/2273f8ab-6d63-4118-bd7a-10b2e36551de-kube-api-access-smw46\") on node \"crc\" DevicePath \"\"" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.628226 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22e0db3c-8094-4ed2-b074-3d0d666070fa-proxy-ca-bundles\") pod \"controller-manager-5b8cb6cc5c-q8s5r\" (UID: \"22e0db3c-8094-4ed2-b074-3d0d666070fa\") " pod="openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.628251 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/22e0db3c-8094-4ed2-b074-3d0d666070fa-client-ca\") pod \"controller-manager-5b8cb6cc5c-q8s5r\" (UID: \"22e0db3c-8094-4ed2-b074-3d0d666070fa\") " pod="openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.628969 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22e0db3c-8094-4ed2-b074-3d0d666070fa-config\") pod \"controller-manager-5b8cb6cc5c-q8s5r\" (UID: \"22e0db3c-8094-4ed2-b074-3d0d666070fa\") " 
pod="openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.629385 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/22e0db3c-8094-4ed2-b074-3d0d666070fa-tmp\") pod \"controller-manager-5b8cb6cc5c-q8s5r\" (UID: \"22e0db3c-8094-4ed2-b074-3d0d666070fa\") " pod="openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.632027 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22e0db3c-8094-4ed2-b074-3d0d666070fa-serving-cert\") pod \"controller-manager-5b8cb6cc5c-q8s5r\" (UID: \"22e0db3c-8094-4ed2-b074-3d0d666070fa\") " pod="openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.643053 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq4vr\" (UniqueName: \"kubernetes.io/projected/22e0db3c-8094-4ed2-b074-3d0d666070fa-kube-api-access-kq4vr\") pod \"controller-manager-5b8cb6cc5c-q8s5r\" (UID: \"22e0db3c-8094-4ed2-b074-3d0d666070fa\") " pod="openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r" Dec 11 16:03:31 crc kubenswrapper[5120]: I1211 16:03:31.781055 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r" Dec 11 16:03:31 crc kubenswrapper[5120]: W1211 16:03:31.956476 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22e0db3c_8094_4ed2_b074_3d0d666070fa.slice/crio-36ed1e8edde611c04531904b7f11e9016e3bdb4818d89e002ce9a1e3506b657d WatchSource:0}: Error finding container 36ed1e8edde611c04531904b7f11e9016e3bdb4818d89e002ce9a1e3506b657d: Status 404 returned error can't find the container with id 36ed1e8edde611c04531904b7f11e9016e3bdb4818d89e002ce9a1e3506b657d Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.412288 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz"] Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.412576 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq" event={"ID":"2273f8ab-6d63-4118-bd7a-10b2e36551de","Type":"ContainerDied","Data":"ba4df1447ef40d62158285ea108ae70bd60698ec591d1c0e1f38e984d84680fd"} Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.412610 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d59777c6b-65rwb"] Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.412627 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d59777c6b-65rwb"] Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.412644 5120 scope.go:117] "RemoveContainer" containerID="0ebd4231b088e4b1eccc0f79284d456703ec244c3821d1a53a8a02246f68c9c3" Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.413937 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz" Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.414645 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq" Dec 11 16:03:32 crc kubenswrapper[5120]: E1211 16:03:32.519788 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2273f8ab_6d63_4118_bd7a_10b2e36551de.slice\": RecentStats: unable to find data in memory cache]" Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.542242 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjvnq\" (UniqueName: \"kubernetes.io/projected/bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd-kube-api-access-fjvnq\") pod \"route-controller-manager-79fb5c4496-rz5cz\" (UID: \"bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd\") " pod="openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz" Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.542313 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd-tmp\") pod \"route-controller-manager-79fb5c4496-rz5cz\" (UID: \"bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd\") " pod="openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz" Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.542357 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd-client-ca\") pod \"route-controller-manager-79fb5c4496-rz5cz\" (UID: \"bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd\") " 
pod="openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz" Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.542391 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd-serving-cert\") pod \"route-controller-manager-79fb5c4496-rz5cz\" (UID: \"bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd\") " pod="openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz" Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.542450 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd-config\") pod \"route-controller-manager-79fb5c4496-rz5cz\" (UID: \"bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd\") " pod="openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz" Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.618355 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.631292 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r" event={"ID":"22e0db3c-8094-4ed2-b074-3d0d666070fa","Type":"ContainerStarted","Data":"36ed1e8edde611c04531904b7f11e9016e3bdb4818d89e002ce9a1e3506b657d"} Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.631330 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.631344 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r"] Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.631358 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq"] Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.631462 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bf74b8778-rdgnq"] Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.631499 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.633576 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.634752 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.643590 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd-serving-cert\") pod \"route-controller-manager-79fb5c4496-rz5cz\" (UID: \"bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd\") " pod="openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz" Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.643675 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd-config\") pod \"route-controller-manager-79fb5c4496-rz5cz\" (UID: \"bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd\") " pod="openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz" Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.643721 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fjvnq\" (UniqueName: 
\"kubernetes.io/projected/bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd-kube-api-access-fjvnq\") pod \"route-controller-manager-79fb5c4496-rz5cz\" (UID: \"bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd\") " pod="openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz" Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.644045 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd-tmp\") pod \"route-controller-manager-79fb5c4496-rz5cz\" (UID: \"bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd\") " pod="openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz" Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.644125 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd-client-ca\") pod \"route-controller-manager-79fb5c4496-rz5cz\" (UID: \"bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd\") " pod="openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz" Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.644636 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd-tmp\") pod \"route-controller-manager-79fb5c4496-rz5cz\" (UID: \"bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd\") " pod="openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz" Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.645537 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd-client-ca\") pod \"route-controller-manager-79fb5c4496-rz5cz\" (UID: \"bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd\") " pod="openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz" Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 
16:03:32.648306 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd-config\") pod \"route-controller-manager-79fb5c4496-rz5cz\" (UID: \"bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd\") " pod="openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz" Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.656033 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd-serving-cert\") pod \"route-controller-manager-79fb5c4496-rz5cz\" (UID: \"bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd\") " pod="openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz" Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.667342 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjvnq\" (UniqueName: \"kubernetes.io/projected/bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd-kube-api-access-fjvnq\") pod \"route-controller-manager-79fb5c4496-rz5cz\" (UID: \"bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd\") " pod="openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz" Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.743676 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz" Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.745108 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/de6d4b2e-552a-403d-be0f-378fe86e41d1-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"de6d4b2e-552a-403d-be0f-378fe86e41d1\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.745337 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/de6d4b2e-552a-403d-be0f-378fe86e41d1-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"de6d4b2e-552a-403d-be0f-378fe86e41d1\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.846578 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/de6d4b2e-552a-403d-be0f-378fe86e41d1-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"de6d4b2e-552a-403d-be0f-378fe86e41d1\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.846688 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/de6d4b2e-552a-403d-be0f-378fe86e41d1-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"de6d4b2e-552a-403d-be0f-378fe86e41d1\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.846789 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/de6d4b2e-552a-403d-be0f-378fe86e41d1-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: 
\"de6d4b2e-552a-403d-be0f-378fe86e41d1\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.868056 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/de6d4b2e-552a-403d-be0f-378fe86e41d1-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"de6d4b2e-552a-403d-be0f-378fe86e41d1\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.921640 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz"] Dec 11 16:03:32 crc kubenswrapper[5120]: I1211 16:03:32.992628 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 11 16:03:33 crc kubenswrapper[5120]: I1211 16:03:33.036997 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07c76ad9-652a-48d6-ae97-b81575835d05" path="/var/lib/kubelet/pods/07c76ad9-652a-48d6-ae97-b81575835d05/volumes" Dec 11 16:03:33 crc kubenswrapper[5120]: I1211 16:03:33.047647 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2273f8ab-6d63-4118-bd7a-10b2e36551de" path="/var/lib/kubelet/pods/2273f8ab-6d63-4118-bd7a-10b2e36551de/volumes" Dec 11 16:03:33 crc kubenswrapper[5120]: I1211 16:03:33.185174 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 11 16:03:33 crc kubenswrapper[5120]: W1211 16:03:33.192456 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podde6d4b2e_552a_403d_be0f_378fe86e41d1.slice/crio-eab685650edb559588fa55aa056e69d996713dd06ecf575ba0decedee90ae29f WatchSource:0}: Error finding container eab685650edb559588fa55aa056e69d996713dd06ecf575ba0decedee90ae29f: Status 404 returned error can't find the container with id 
eab685650edb559588fa55aa056e69d996713dd06ecf575ba0decedee90ae29f Dec 11 16:03:33 crc kubenswrapper[5120]: I1211 16:03:33.246054 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r" event={"ID":"22e0db3c-8094-4ed2-b074-3d0d666070fa","Type":"ContainerStarted","Data":"7edf61289f1853a92999ba4994c994647d0715b038221c32edab06f608aee5fb"} Dec 11 16:03:33 crc kubenswrapper[5120]: I1211 16:03:33.247492 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r" Dec 11 16:03:33 crc kubenswrapper[5120]: I1211 16:03:33.250584 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz" event={"ID":"bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd","Type":"ContainerStarted","Data":"a5f49e89569eb4489d28776bcffaf3c093e9a363c40e8ebde60ca062f5a7b0f3"} Dec 11 16:03:33 crc kubenswrapper[5120]: I1211 16:03:33.250623 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz" event={"ID":"bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd","Type":"ContainerStarted","Data":"ef3e24db75f70eb9465b64c845549e0aa3b06f09bb1a4b711894e961688ac90a"} Dec 11 16:03:33 crc kubenswrapper[5120]: I1211 16:03:33.251481 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz" Dec 11 16:03:33 crc kubenswrapper[5120]: I1211 16:03:33.260852 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"de6d4b2e-552a-403d-be0f-378fe86e41d1","Type":"ContainerStarted","Data":"eab685650edb559588fa55aa056e69d996713dd06ecf575ba0decedee90ae29f"} Dec 11 16:03:33 crc kubenswrapper[5120]: I1211 16:03:33.283670 5120 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r" podStartSLOduration=5.28365296 podStartE2EDuration="5.28365296s" podCreationTimestamp="2025-12-11 16:03:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:03:33.268441061 +0000 UTC m=+162.522744392" watchObservedRunningTime="2025-12-11 16:03:33.28365296 +0000 UTC m=+162.537956291"
Dec 11 16:03:33 crc kubenswrapper[5120]: I1211 16:03:33.284750 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz" podStartSLOduration=5.284742321 podStartE2EDuration="5.284742321s" podCreationTimestamp="2025-12-11 16:03:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:03:33.282022684 +0000 UTC m=+162.536326015" watchObservedRunningTime="2025-12-11 16:03:33.284742321 +0000 UTC m=+162.539045662"
Dec 11 16:03:33 crc kubenswrapper[5120]: I1211 16:03:33.406550 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c2744"
Dec 11 16:03:33 crc kubenswrapper[5120]: I1211 16:03:33.454775 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c2744"
Dec 11 16:03:33 crc kubenswrapper[5120]: I1211 16:03:33.558856 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r"
Dec 11 16:03:33 crc kubenswrapper[5120]: I1211 16:03:33.567990 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz"
Dec 11 16:03:33 crc kubenswrapper[5120]: I1211 16:03:33.603926 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rj8n4"
Dec 11 16:03:33 crc kubenswrapper[5120]: I1211 16:03:33.637708 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rj8n4"
Dec 11 16:03:34 crc kubenswrapper[5120]: I1211 16:03:34.085209 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r58jk"
Dec 11 16:03:34 crc kubenswrapper[5120]: I1211 16:03:34.124011 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r58jk"
Dec 11 16:03:34 crc kubenswrapper[5120]: I1211 16:03:34.630931 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r58jk"]
Dec 11 16:03:35 crc kubenswrapper[5120]: I1211 16:03:35.230560 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k644q"
Dec 11 16:03:35 crc kubenswrapper[5120]: I1211 16:03:35.351025 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"de6d4b2e-552a-403d-be0f-378fe86e41d1","Type":"ContainerStarted","Data":"82504e8ca950fda18d4be7fdc606c1372d79c5d3b9abecb1360a14bd2cb8235d"}
Dec 11 16:03:35 crc kubenswrapper[5120]: I1211 16:03:35.351746 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r58jk" podUID="d4f9c834-beb5-42f0-895d-eca73b7897e0" containerName="registry-server" containerID="cri-o://fd2b788572c5a601df73d68e78a10530402dc0bba1f973261492aa3552afbbd0" gracePeriod=2
Dec 11 16:03:35 crc kubenswrapper[5120]: I1211 16:03:35.365328 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=4.365308906 podStartE2EDuration="4.365308906s" podCreationTimestamp="2025-12-11 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:03:35.364815652 +0000 UTC m=+164.619118993" watchObservedRunningTime="2025-12-11 16:03:35.365308906 +0000 UTC m=+164.619612247"
Dec 11 16:03:36 crc kubenswrapper[5120]: I1211 16:03:36.229689 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mpccf"
Dec 11 16:03:36 crc kubenswrapper[5120]: I1211 16:03:36.357940 5120 generic.go:358] "Generic (PLEG): container finished" podID="de6d4b2e-552a-403d-be0f-378fe86e41d1" containerID="82504e8ca950fda18d4be7fdc606c1372d79c5d3b9abecb1360a14bd2cb8235d" exitCode=0
Dec 11 16:03:36 crc kubenswrapper[5120]: I1211 16:03:36.358027 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"de6d4b2e-552a-403d-be0f-378fe86e41d1","Type":"ContainerDied","Data":"82504e8ca950fda18d4be7fdc606c1372d79c5d3b9abecb1360a14bd2cb8235d"}
Dec 11 16:03:36 crc kubenswrapper[5120]: I1211 16:03:36.361079 5120 generic.go:358] "Generic (PLEG): container finished" podID="d4f9c834-beb5-42f0-895d-eca73b7897e0" containerID="fd2b788572c5a601df73d68e78a10530402dc0bba1f973261492aa3552afbbd0" exitCode=0
Dec 11 16:03:36 crc kubenswrapper[5120]: I1211 16:03:36.361208 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r58jk" event={"ID":"d4f9c834-beb5-42f0-895d-eca73b7897e0","Type":"ContainerDied","Data":"fd2b788572c5a601df73d68e78a10530402dc0bba1f973261492aa3552afbbd0"}
Dec 11 16:03:37 crc kubenswrapper[5120]: I1211 16:03:37.097990 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-58qrd"
Dec 11 16:03:37 crc kubenswrapper[5120]: I1211 16:03:37.141292 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-58qrd"
Dec 11 16:03:37 crc kubenswrapper[5120]: I1211 16:03:37.166390 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r58jk"
Dec 11 16:03:37 crc kubenswrapper[5120]: I1211 16:03:37.305534 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4f9c834-beb5-42f0-895d-eca73b7897e0-utilities\") pod \"d4f9c834-beb5-42f0-895d-eca73b7897e0\" (UID: \"d4f9c834-beb5-42f0-895d-eca73b7897e0\") "
Dec 11 16:03:37 crc kubenswrapper[5120]: I1211 16:03:37.305600 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2zqs\" (UniqueName: \"kubernetes.io/projected/d4f9c834-beb5-42f0-895d-eca73b7897e0-kube-api-access-m2zqs\") pod \"d4f9c834-beb5-42f0-895d-eca73b7897e0\" (UID: \"d4f9c834-beb5-42f0-895d-eca73b7897e0\") "
Dec 11 16:03:37 crc kubenswrapper[5120]: I1211 16:03:37.305716 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4f9c834-beb5-42f0-895d-eca73b7897e0-catalog-content\") pod \"d4f9c834-beb5-42f0-895d-eca73b7897e0\" (UID: \"d4f9c834-beb5-42f0-895d-eca73b7897e0\") "
Dec 11 16:03:37 crc kubenswrapper[5120]: I1211 16:03:37.307636 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4f9c834-beb5-42f0-895d-eca73b7897e0-utilities" (OuterVolumeSpecName: "utilities") pod "d4f9c834-beb5-42f0-895d-eca73b7897e0" (UID: "d4f9c834-beb5-42f0-895d-eca73b7897e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:03:37 crc kubenswrapper[5120]: I1211 16:03:37.311909 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4f9c834-beb5-42f0-895d-eca73b7897e0-kube-api-access-m2zqs" (OuterVolumeSpecName: "kube-api-access-m2zqs") pod "d4f9c834-beb5-42f0-895d-eca73b7897e0" (UID: "d4f9c834-beb5-42f0-895d-eca73b7897e0"). InnerVolumeSpecName "kube-api-access-m2zqs". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:03:37 crc kubenswrapper[5120]: I1211 16:03:37.331391 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-96ltm"
Dec 11 16:03:37 crc kubenswrapper[5120]: I1211 16:03:37.370526 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r58jk" event={"ID":"d4f9c834-beb5-42f0-895d-eca73b7897e0","Type":"ContainerDied","Data":"8ed0da74d8c1af1cf5df68fd25e80676a77f7d9a3a5fc489b6836948571e5caf"}
Dec 11 16:03:37 crc kubenswrapper[5120]: I1211 16:03:37.370586 5120 scope.go:117] "RemoveContainer" containerID="fd2b788572c5a601df73d68e78a10530402dc0bba1f973261492aa3552afbbd0"
Dec 11 16:03:37 crc kubenswrapper[5120]: I1211 16:03:37.370599 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r58jk"
Dec 11 16:03:37 crc kubenswrapper[5120]: I1211 16:03:37.378543 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4f9c834-beb5-42f0-895d-eca73b7897e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4f9c834-beb5-42f0-895d-eca73b7897e0" (UID: "d4f9c834-beb5-42f0-895d-eca73b7897e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:03:37 crc kubenswrapper[5120]: I1211 16:03:37.389393 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-96ltm"
Dec 11 16:03:37 crc kubenswrapper[5120]: I1211 16:03:37.403261 5120 scope.go:117] "RemoveContainer" containerID="ab8d504a39af181051af63cf896866b0fed7f080d71851c15a0b807218b69e71"
Dec 11 16:03:37 crc kubenswrapper[5120]: I1211 16:03:37.407942 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4f9c834-beb5-42f0-895d-eca73b7897e0-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 16:03:37 crc kubenswrapper[5120]: I1211 16:03:37.407974 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4f9c834-beb5-42f0-895d-eca73b7897e0-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 16:03:37 crc kubenswrapper[5120]: I1211 16:03:37.407985 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m2zqs\" (UniqueName: \"kubernetes.io/projected/d4f9c834-beb5-42f0-895d-eca73b7897e0-kube-api-access-m2zqs\") on node \"crc\" DevicePath \"\""
Dec 11 16:03:37 crc kubenswrapper[5120]: I1211 16:03:37.428796 5120 scope.go:117] "RemoveContainer" containerID="099b25d20224c0f01ae189ef7af69cea4acb5848938eff4cfc3bd413daea936b"
Dec 11 16:03:37 crc kubenswrapper[5120]: I1211 16:03:37.561273 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 11 16:03:37 crc kubenswrapper[5120]: I1211 16:03:37.629944 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k644q"]
Dec 11 16:03:37 crc kubenswrapper[5120]: I1211 16:03:37.630212 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k644q" podUID="24c0e236-bb3f-4b08-ba51-b0881c127d94" containerName="registry-server" containerID="cri-o://7699fa3e82c9493c9effbc8760e893407f1116b817b6bc2f8617b50c475935df" gracePeriod=2
Dec 11 16:03:37 crc kubenswrapper[5120]: I1211 16:03:37.701703 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r58jk"]
Dec 11 16:03:37 crc kubenswrapper[5120]: I1211 16:03:37.706700 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r58jk"]
Dec 11 16:03:37 crc kubenswrapper[5120]: I1211 16:03:37.715185 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/de6d4b2e-552a-403d-be0f-378fe86e41d1-kube-api-access\") pod \"de6d4b2e-552a-403d-be0f-378fe86e41d1\" (UID: \"de6d4b2e-552a-403d-be0f-378fe86e41d1\") "
Dec 11 16:03:37 crc kubenswrapper[5120]: I1211 16:03:37.715234 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/de6d4b2e-552a-403d-be0f-378fe86e41d1-kubelet-dir\") pod \"de6d4b2e-552a-403d-be0f-378fe86e41d1\" (UID: \"de6d4b2e-552a-403d-be0f-378fe86e41d1\") "
Dec 11 16:03:37 crc kubenswrapper[5120]: I1211 16:03:37.715488 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de6d4b2e-552a-403d-be0f-378fe86e41d1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "de6d4b2e-552a-403d-be0f-378fe86e41d1" (UID: "de6d4b2e-552a-403d-be0f-378fe86e41d1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 11 16:03:37 crc kubenswrapper[5120]: I1211 16:03:37.718988 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de6d4b2e-552a-403d-be0f-378fe86e41d1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "de6d4b2e-552a-403d-be0f-378fe86e41d1" (UID: "de6d4b2e-552a-403d-be0f-378fe86e41d1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:03:37 crc kubenswrapper[5120]: I1211 16:03:37.816954 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/de6d4b2e-552a-403d-be0f-378fe86e41d1-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 11 16:03:37 crc kubenswrapper[5120]: I1211 16:03:37.816993 5120 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/de6d4b2e-552a-403d-be0f-378fe86e41d1-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 11 16:03:38 crc kubenswrapper[5120]: I1211 16:03:38.376932 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"de6d4b2e-552a-403d-be0f-378fe86e41d1","Type":"ContainerDied","Data":"eab685650edb559588fa55aa056e69d996713dd06ecf575ba0decedee90ae29f"}
Dec 11 16:03:38 crc kubenswrapper[5120]: I1211 16:03:38.376975 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eab685650edb559588fa55aa056e69d996713dd06ecf575ba0decedee90ae29f"
Dec 11 16:03:38 crc kubenswrapper[5120]: I1211 16:03:38.377055 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 11 16:03:38 crc kubenswrapper[5120]: I1211 16:03:38.633166 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mpccf"]
Dec 11 16:03:38 crc kubenswrapper[5120]: I1211 16:03:38.633873 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mpccf" podUID="1bc42a63-9035-4803-98fb-ce63eef24511" containerName="registry-server" containerID="cri-o://f2e05050d958b9c797a26d2bf9f6a3ad48f736f12b394cb5a291ea7a4c2911e5" gracePeriod=2
Dec 11 16:03:39 crc kubenswrapper[5120]: I1211 16:03:39.030093 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4f9c834-beb5-42f0-895d-eca73b7897e0" path="/var/lib/kubelet/pods/d4f9c834-beb5-42f0-895d-eca73b7897e0/volumes"
Dec 11 16:03:40 crc kubenswrapper[5120]: I1211 16:03:40.031673 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-96ltm"]
Dec 11 16:03:40 crc kubenswrapper[5120]: I1211 16:03:40.031988 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-96ltm" podUID="10b5afee-6cb5-425d-af1b-b44a204542f3" containerName="registry-server" containerID="cri-o://c34cc1408a88662250890eadd2df396c9641cf0e24c4683e6cbf73ad520116ce" gracePeriod=2
Dec 11 16:03:41 crc kubenswrapper[5120]: I1211 16:03:41.399162 5120 generic.go:358] "Generic (PLEG): container finished" podID="10b5afee-6cb5-425d-af1b-b44a204542f3" containerID="c34cc1408a88662250890eadd2df396c9641cf0e24c4683e6cbf73ad520116ce" exitCode=0
Dec 11 16:03:41 crc kubenswrapper[5120]: I1211 16:03:41.399193 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96ltm" event={"ID":"10b5afee-6cb5-425d-af1b-b44a204542f3","Type":"ContainerDied","Data":"c34cc1408a88662250890eadd2df396c9641cf0e24c4683e6cbf73ad520116ce"}
Dec 11 16:03:41 crc kubenswrapper[5120]: I1211 16:03:41.400688 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mpccf_1bc42a63-9035-4803-98fb-ce63eef24511/registry-server/0.log"
Dec 11 16:03:41 crc kubenswrapper[5120]: I1211 16:03:41.401660 5120 generic.go:358] "Generic (PLEG): container finished" podID="1bc42a63-9035-4803-98fb-ce63eef24511" containerID="f2e05050d958b9c797a26d2bf9f6a3ad48f736f12b394cb5a291ea7a4c2911e5" exitCode=137
Dec 11 16:03:41 crc kubenswrapper[5120]: I1211 16:03:41.401758 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mpccf" event={"ID":"1bc42a63-9035-4803-98fb-ce63eef24511","Type":"ContainerDied","Data":"f2e05050d958b9c797a26d2bf9f6a3ad48f736f12b394cb5a291ea7a4c2911e5"}
Dec 11 16:03:41 crc kubenswrapper[5120]: I1211 16:03:41.413473 5120 generic.go:358] "Generic (PLEG): container finished" podID="24c0e236-bb3f-4b08-ba51-b0881c127d94" containerID="7699fa3e82c9493c9effbc8760e893407f1116b817b6bc2f8617b50c475935df" exitCode=0
Dec 11 16:03:41 crc kubenswrapper[5120]: I1211 16:03:41.413560 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k644q" event={"ID":"24c0e236-bb3f-4b08-ba51-b0881c127d94","Type":"ContainerDied","Data":"7699fa3e82c9493c9effbc8760e893407f1116b817b6bc2f8617b50c475935df"}
Dec 11 16:03:41 crc kubenswrapper[5120]: I1211 16:03:41.740472 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k644q"
Dec 11 16:03:41 crc kubenswrapper[5120]: I1211 16:03:41.820247 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24c0e236-bb3f-4b08-ba51-b0881c127d94-utilities\") pod \"24c0e236-bb3f-4b08-ba51-b0881c127d94\" (UID: \"24c0e236-bb3f-4b08-ba51-b0881c127d94\") "
Dec 11 16:03:41 crc kubenswrapper[5120]: I1211 16:03:41.820380 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwjql\" (UniqueName: \"kubernetes.io/projected/24c0e236-bb3f-4b08-ba51-b0881c127d94-kube-api-access-vwjql\") pod \"24c0e236-bb3f-4b08-ba51-b0881c127d94\" (UID: \"24c0e236-bb3f-4b08-ba51-b0881c127d94\") "
Dec 11 16:03:41 crc kubenswrapper[5120]: I1211 16:03:41.820499 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24c0e236-bb3f-4b08-ba51-b0881c127d94-catalog-content\") pod \"24c0e236-bb3f-4b08-ba51-b0881c127d94\" (UID: \"24c0e236-bb3f-4b08-ba51-b0881c127d94\") "
Dec 11 16:03:41 crc kubenswrapper[5120]: I1211 16:03:41.821571 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24c0e236-bb3f-4b08-ba51-b0881c127d94-utilities" (OuterVolumeSpecName: "utilities") pod "24c0e236-bb3f-4b08-ba51-b0881c127d94" (UID: "24c0e236-bb3f-4b08-ba51-b0881c127d94"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:03:41 crc kubenswrapper[5120]: I1211 16:03:41.827945 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24c0e236-bb3f-4b08-ba51-b0881c127d94-kube-api-access-vwjql" (OuterVolumeSpecName: "kube-api-access-vwjql") pod "24c0e236-bb3f-4b08-ba51-b0881c127d94" (UID: "24c0e236-bb3f-4b08-ba51-b0881c127d94"). InnerVolumeSpecName "kube-api-access-vwjql". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:03:41 crc kubenswrapper[5120]: I1211 16:03:41.832021 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mpccf_1bc42a63-9035-4803-98fb-ce63eef24511/registry-server/0.log"
Dec 11 16:03:41 crc kubenswrapper[5120]: I1211 16:03:41.833034 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mpccf"
Dec 11 16:03:41 crc kubenswrapper[5120]: I1211 16:03:41.837444 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-96ltm"
Dec 11 16:03:41 crc kubenswrapper[5120]: I1211 16:03:41.879084 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24c0e236-bb3f-4b08-ba51-b0881c127d94-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24c0e236-bb3f-4b08-ba51-b0881c127d94" (UID: "24c0e236-bb3f-4b08-ba51-b0881c127d94"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:03:41 crc kubenswrapper[5120]: I1211 16:03:41.922209 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bc42a63-9035-4803-98fb-ce63eef24511-catalog-content\") pod \"1bc42a63-9035-4803-98fb-ce63eef24511\" (UID: \"1bc42a63-9035-4803-98fb-ce63eef24511\") "
Dec 11 16:03:41 crc kubenswrapper[5120]: I1211 16:03:41.922271 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bc42a63-9035-4803-98fb-ce63eef24511-utilities\") pod \"1bc42a63-9035-4803-98fb-ce63eef24511\" (UID: \"1bc42a63-9035-4803-98fb-ce63eef24511\") "
Dec 11 16:03:41 crc kubenswrapper[5120]: I1211 16:03:41.922324 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10b5afee-6cb5-425d-af1b-b44a204542f3-utilities\") pod \"10b5afee-6cb5-425d-af1b-b44a204542f3\" (UID: \"10b5afee-6cb5-425d-af1b-b44a204542f3\") "
Dec 11 16:03:41 crc kubenswrapper[5120]: I1211 16:03:41.922341 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10b5afee-6cb5-425d-af1b-b44a204542f3-catalog-content\") pod \"10b5afee-6cb5-425d-af1b-b44a204542f3\" (UID: \"10b5afee-6cb5-425d-af1b-b44a204542f3\") "
Dec 11 16:03:41 crc kubenswrapper[5120]: I1211 16:03:41.922391 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wq5m\" (UniqueName: \"kubernetes.io/projected/10b5afee-6cb5-425d-af1b-b44a204542f3-kube-api-access-7wq5m\") pod \"10b5afee-6cb5-425d-af1b-b44a204542f3\" (UID: \"10b5afee-6cb5-425d-af1b-b44a204542f3\") "
Dec 11 16:03:41 crc kubenswrapper[5120]: I1211 16:03:41.922438 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxhrr\" (UniqueName: \"kubernetes.io/projected/1bc42a63-9035-4803-98fb-ce63eef24511-kube-api-access-vxhrr\") pod \"1bc42a63-9035-4803-98fb-ce63eef24511\" (UID: \"1bc42a63-9035-4803-98fb-ce63eef24511\") "
Dec 11 16:03:41 crc kubenswrapper[5120]: I1211 16:03:41.922639 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24c0e236-bb3f-4b08-ba51-b0881c127d94-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 16:03:41 crc kubenswrapper[5120]: I1211 16:03:41.922650 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24c0e236-bb3f-4b08-ba51-b0881c127d94-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 16:03:41 crc kubenswrapper[5120]: I1211 16:03:41.922686 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vwjql\" (UniqueName: \"kubernetes.io/projected/24c0e236-bb3f-4b08-ba51-b0881c127d94-kube-api-access-vwjql\") on node \"crc\" DevicePath \"\""
Dec 11 16:03:41 crc kubenswrapper[5120]: I1211 16:03:41.924179 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10b5afee-6cb5-425d-af1b-b44a204542f3-utilities" (OuterVolumeSpecName: "utilities") pod "10b5afee-6cb5-425d-af1b-b44a204542f3" (UID: "10b5afee-6cb5-425d-af1b-b44a204542f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:03:41 crc kubenswrapper[5120]: I1211 16:03:41.925131 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bc42a63-9035-4803-98fb-ce63eef24511-utilities" (OuterVolumeSpecName: "utilities") pod "1bc42a63-9035-4803-98fb-ce63eef24511" (UID: "1bc42a63-9035-4803-98fb-ce63eef24511"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:03:41 crc kubenswrapper[5120]: I1211 16:03:41.925741 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bc42a63-9035-4803-98fb-ce63eef24511-kube-api-access-vxhrr" (OuterVolumeSpecName: "kube-api-access-vxhrr") pod "1bc42a63-9035-4803-98fb-ce63eef24511" (UID: "1bc42a63-9035-4803-98fb-ce63eef24511"). InnerVolumeSpecName "kube-api-access-vxhrr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:03:41 crc kubenswrapper[5120]: I1211 16:03:41.927957 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10b5afee-6cb5-425d-af1b-b44a204542f3-kube-api-access-7wq5m" (OuterVolumeSpecName: "kube-api-access-7wq5m") pod "10b5afee-6cb5-425d-af1b-b44a204542f3" (UID: "10b5afee-6cb5-425d-af1b-b44a204542f3"). InnerVolumeSpecName "kube-api-access-7wq5m". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:03:41 crc kubenswrapper[5120]: I1211 16:03:41.946552 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bc42a63-9035-4803-98fb-ce63eef24511-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1bc42a63-9035-4803-98fb-ce63eef24511" (UID: "1bc42a63-9035-4803-98fb-ce63eef24511"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.017558 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10b5afee-6cb5-425d-af1b-b44a204542f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10b5afee-6cb5-425d-af1b-b44a204542f3" (UID: "10b5afee-6cb5-425d-af1b-b44a204542f3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.024212 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7wq5m\" (UniqueName: \"kubernetes.io/projected/10b5afee-6cb5-425d-af1b-b44a204542f3-kube-api-access-7wq5m\") on node \"crc\" DevicePath \"\""
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.024240 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vxhrr\" (UniqueName: \"kubernetes.io/projected/1bc42a63-9035-4803-98fb-ce63eef24511-kube-api-access-vxhrr\") on node \"crc\" DevicePath \"\""
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.024250 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bc42a63-9035-4803-98fb-ce63eef24511-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.024259 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bc42a63-9035-4803-98fb-ce63eef24511-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.024267 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10b5afee-6cb5-425d-af1b-b44a204542f3-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.024274 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10b5afee-6cb5-425d-af1b-b44a204542f3-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.343355 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.343880 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="10b5afee-6cb5-425d-af1b-b44a204542f3" containerName="extract-content"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.343896 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="10b5afee-6cb5-425d-af1b-b44a204542f3" containerName="extract-content"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.343911 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d4f9c834-beb5-42f0-895d-eca73b7897e0" containerName="extract-content"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.343917 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4f9c834-beb5-42f0-895d-eca73b7897e0" containerName="extract-content"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.343927 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="24c0e236-bb3f-4b08-ba51-b0881c127d94" containerName="extract-content"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.343933 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="24c0e236-bb3f-4b08-ba51-b0881c127d94" containerName="extract-content"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.343941 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="10b5afee-6cb5-425d-af1b-b44a204542f3" containerName="extract-utilities"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.343946 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="10b5afee-6cb5-425d-af1b-b44a204542f3" containerName="extract-utilities"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.343954 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d4f9c834-beb5-42f0-895d-eca73b7897e0" containerName="registry-server"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.343959 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4f9c834-beb5-42f0-895d-eca73b7897e0" containerName="registry-server"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.343968 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="24c0e236-bb3f-4b08-ba51-b0881c127d94" containerName="registry-server"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.343973 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="24c0e236-bb3f-4b08-ba51-b0881c127d94" containerName="registry-server"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.343982 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d4f9c834-beb5-42f0-895d-eca73b7897e0" containerName="extract-utilities"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.343987 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4f9c834-beb5-42f0-895d-eca73b7897e0" containerName="extract-utilities"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.343995 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="de6d4b2e-552a-403d-be0f-378fe86e41d1" containerName="pruner"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.344000 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="de6d4b2e-552a-403d-be0f-378fe86e41d1" containerName="pruner"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.344009 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1bc42a63-9035-4803-98fb-ce63eef24511" containerName="extract-utilities"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.344014 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bc42a63-9035-4803-98fb-ce63eef24511" containerName="extract-utilities"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.344025 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="24c0e236-bb3f-4b08-ba51-b0881c127d94" containerName="extract-utilities"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.344030 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="24c0e236-bb3f-4b08-ba51-b0881c127d94" containerName="extract-utilities"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.344037 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="10b5afee-6cb5-425d-af1b-b44a204542f3" containerName="registry-server"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.344042 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="10b5afee-6cb5-425d-af1b-b44a204542f3" containerName="registry-server"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.344054 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1bc42a63-9035-4803-98fb-ce63eef24511" containerName="extract-content"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.344059 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bc42a63-9035-4803-98fb-ce63eef24511" containerName="extract-content"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.344066 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1bc42a63-9035-4803-98fb-ce63eef24511" containerName="registry-server"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.344071 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bc42a63-9035-4803-98fb-ce63eef24511" containerName="registry-server"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.344170 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="de6d4b2e-552a-403d-be0f-378fe86e41d1" containerName="pruner"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.344182 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="24c0e236-bb3f-4b08-ba51-b0881c127d94" containerName="registry-server"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.344189 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="d4f9c834-beb5-42f0-895d-eca73b7897e0" containerName="registry-server"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.344196 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="1bc42a63-9035-4803-98fb-ce63eef24511" containerName="registry-server"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.344205 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="10b5afee-6cb5-425d-af1b-b44a204542f3" containerName="registry-server"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.357989 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.358256 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.360085 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.360173 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.419094 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96ltm" event={"ID":"10b5afee-6cb5-425d-af1b-b44a204542f3","Type":"ContainerDied","Data":"7d6a31055b8a4767328faa9e52d1661dbfb6949fa6b171144f30c950f5e90e85"}
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.419114 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-96ltm"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.419160 5120 scope.go:117] "RemoveContainer" containerID="c34cc1408a88662250890eadd2df396c9641cf0e24c4683e6cbf73ad520116ce"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.420800 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mpccf_1bc42a63-9035-4803-98fb-ce63eef24511/registry-server/0.log"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.421576 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mpccf" event={"ID":"1bc42a63-9035-4803-98fb-ce63eef24511","Type":"ContainerDied","Data":"364a2d83d50e1aa77c09d2a4a765b9e155119be8e5fdb7f16f4182a8f0942506"}
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.421643 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mpccf"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.424059 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k644q" event={"ID":"24c0e236-bb3f-4b08-ba51-b0881c127d94","Type":"ContainerDied","Data":"508c04b7bace26227fbe03026b7dd7d6625550c9d336df5daaef210f5962c11d"}
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.424105 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k644q"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.429343 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/61cdc95b-8445-4386-87b3-a5a6c1ef5409-var-lock\") pod \"installer-12-crc\" (UID: \"61cdc95b-8445-4386-87b3-a5a6c1ef5409\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.429383 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61cdc95b-8445-4386-87b3-a5a6c1ef5409-kube-api-access\") pod \"installer-12-crc\" (UID: \"61cdc95b-8445-4386-87b3-a5a6c1ef5409\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.429402 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/61cdc95b-8445-4386-87b3-a5a6c1ef5409-kubelet-dir\") pod \"installer-12-crc\" (UID: \"61cdc95b-8445-4386-87b3-a5a6c1ef5409\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.436215 5120 scope.go:117] "RemoveContainer" containerID="11aa75460eba12313c0be6a9e184e91c5784721826fd86c54122e63735859b70"
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.459277 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mpccf"]
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.461342 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mpccf"]
Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.465535 5120 scope.go:117] "RemoveContainer" containerID="a502e43d734b6a019178c53700fb455ea7609bab3b0ffcb05696b7ec1be2a94f"
Dec 11 16:03:42 crc
kubenswrapper[5120]: I1211 16:03:42.476218 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-96ltm"] Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.484731 5120 scope.go:117] "RemoveContainer" containerID="f2e05050d958b9c797a26d2bf9f6a3ad48f736f12b394cb5a291ea7a4c2911e5" Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.486849 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-96ltm"] Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.490813 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k644q"] Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.499841 5120 scope.go:117] "RemoveContainer" containerID="e0ac1793b451a6b21df2e06c217feedf58722c7e342c653d85e8999c8f88e558" Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.500000 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k644q"] Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.512663 5120 scope.go:117] "RemoveContainer" containerID="112d5a9188a91972d01742947849c864a49a8f35e9e1f551246d95a7ed420ac0" Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.530476 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/61cdc95b-8445-4386-87b3-a5a6c1ef5409-var-lock\") pod \"installer-12-crc\" (UID: \"61cdc95b-8445-4386-87b3-a5a6c1ef5409\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.530516 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61cdc95b-8445-4386-87b3-a5a6c1ef5409-kube-api-access\") pod \"installer-12-crc\" (UID: \"61cdc95b-8445-4386-87b3-a5a6c1ef5409\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 
16:03:42.530539 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/61cdc95b-8445-4386-87b3-a5a6c1ef5409-kubelet-dir\") pod \"installer-12-crc\" (UID: \"61cdc95b-8445-4386-87b3-a5a6c1ef5409\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.530624 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/61cdc95b-8445-4386-87b3-a5a6c1ef5409-var-lock\") pod \"installer-12-crc\" (UID: \"61cdc95b-8445-4386-87b3-a5a6c1ef5409\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.530642 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/61cdc95b-8445-4386-87b3-a5a6c1ef5409-kubelet-dir\") pod \"installer-12-crc\" (UID: \"61cdc95b-8445-4386-87b3-a5a6c1ef5409\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.535290 5120 scope.go:117] "RemoveContainer" containerID="7699fa3e82c9493c9effbc8760e893407f1116b817b6bc2f8617b50c475935df" Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.547471 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61cdc95b-8445-4386-87b3-a5a6c1ef5409-kube-api-access\") pod \"installer-12-crc\" (UID: \"61cdc95b-8445-4386-87b3-a5a6c1ef5409\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.551108 5120 scope.go:117] "RemoveContainer" containerID="f0b354ac31941a7adb6e28bc78512d0621a40b8ddc9a5c8178f93c115b7d8213" Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.565989 5120 scope.go:117] "RemoveContainer" containerID="e98c25c78080ed151407b3d248f1f605d6365f08a0f8186c1ecbb4879f7e7bfc" Dec 11 16:03:42 crc 
kubenswrapper[5120]: I1211 16:03:42.675112 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 11 16:03:42 crc kubenswrapper[5120]: I1211 16:03:42.874982 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 11 16:03:43 crc kubenswrapper[5120]: I1211 16:03:43.034478 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10b5afee-6cb5-425d-af1b-b44a204542f3" path="/var/lib/kubelet/pods/10b5afee-6cb5-425d-af1b-b44a204542f3/volumes" Dec 11 16:03:43 crc kubenswrapper[5120]: I1211 16:03:43.035692 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bc42a63-9035-4803-98fb-ce63eef24511" path="/var/lib/kubelet/pods/1bc42a63-9035-4803-98fb-ce63eef24511/volumes" Dec 11 16:03:43 crc kubenswrapper[5120]: I1211 16:03:43.036335 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24c0e236-bb3f-4b08-ba51-b0881c127d94" path="/var/lib/kubelet/pods/24c0e236-bb3f-4b08-ba51-b0881c127d94/volumes" Dec 11 16:03:43 crc kubenswrapper[5120]: I1211 16:03:43.440831 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"61cdc95b-8445-4386-87b3-a5a6c1ef5409","Type":"ContainerStarted","Data":"e285d1bb257a9071a1480e0c5a10a65fe338b11dbfec206177bd2059a862d066"} Dec 11 16:03:43 crc kubenswrapper[5120]: I1211 16:03:43.440877 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"61cdc95b-8445-4386-87b3-a5a6c1ef5409","Type":"ContainerStarted","Data":"9af363072b40cee21c1984ef46472cbc501d585365b1967bc3370f7399f64793"} Dec 11 16:03:43 crc kubenswrapper[5120]: I1211 16:03:43.464029 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=1.464011997 podStartE2EDuration="1.464011997s" 
podCreationTimestamp="2025-12-11 16:03:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:03:43.461603249 +0000 UTC m=+172.715906580" watchObservedRunningTime="2025-12-11 16:03:43.464011997 +0000 UTC m=+172.718315328" Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.220832 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r"] Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.221869 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r" podUID="22e0db3c-8094-4ed2-b074-3d0d666070fa" containerName="controller-manager" containerID="cri-o://7edf61289f1853a92999ba4994c994647d0715b038221c32edab06f608aee5fb" gracePeriod=30 Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.299915 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz"] Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.300224 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz" podUID="bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd" containerName="route-controller-manager" containerID="cri-o://a5f49e89569eb4489d28776bcffaf3c093e9a363c40e8ebde60ca062f5a7b0f3" gracePeriod=30 Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.471390 5120 generic.go:358] "Generic (PLEG): container finished" podID="22e0db3c-8094-4ed2-b074-3d0d666070fa" containerID="7edf61289f1853a92999ba4994c994647d0715b038221c32edab06f608aee5fb" exitCode=0 Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.471502 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r" 
event={"ID":"22e0db3c-8094-4ed2-b074-3d0d666070fa","Type":"ContainerDied","Data":"7edf61289f1853a92999ba4994c994647d0715b038221c32edab06f608aee5fb"} Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.473220 5120 generic.go:358] "Generic (PLEG): container finished" podID="bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd" containerID="a5f49e89569eb4489d28776bcffaf3c093e9a363c40e8ebde60ca062f5a7b0f3" exitCode=0 Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.473277 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz" event={"ID":"bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd","Type":"ContainerDied","Data":"a5f49e89569eb4489d28776bcffaf3c093e9a363c40e8ebde60ca062f5a7b0f3"} Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.762423 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz" Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.785970 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24"] Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.786532 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd" containerName="route-controller-manager" Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.786551 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd" containerName="route-controller-manager" Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.786660 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd" containerName="route-controller-manager" Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.789817 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24" Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.802551 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24"] Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.809384 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjvnq\" (UniqueName: \"kubernetes.io/projected/bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd-kube-api-access-fjvnq\") pod \"bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd\" (UID: \"bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd\") " Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.809429 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd-serving-cert\") pod \"bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd\" (UID: \"bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd\") " Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.809515 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd-tmp\") pod \"bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd\" (UID: \"bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd\") " Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.809589 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd-client-ca\") pod \"bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd\" (UID: \"bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd\") " Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.809643 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd-config\") pod \"bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd\" (UID: 
\"bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd\") " Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.810395 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd-config" (OuterVolumeSpecName: "config") pod "bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd" (UID: "bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.811129 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd-tmp" (OuterVolumeSpecName: "tmp") pod "bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd" (UID: "bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.811490 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd-client-ca" (OuterVolumeSpecName: "client-ca") pod "bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd" (UID: "bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.823714 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd-kube-api-access-fjvnq" (OuterVolumeSpecName: "kube-api-access-fjvnq") pod "bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd" (UID: "bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd"). InnerVolumeSpecName "kube-api-access-fjvnq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.866985 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd" (UID: "bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.911323 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2173574-1728-412b-a196-cda3303ddc5e-serving-cert\") pod \"route-controller-manager-64ff8c7849-4mr24\" (UID: \"c2173574-1728-412b-a196-cda3303ddc5e\") " pod="openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24" Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.911393 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfkgn\" (UniqueName: \"kubernetes.io/projected/c2173574-1728-412b-a196-cda3303ddc5e-kube-api-access-bfkgn\") pod \"route-controller-manager-64ff8c7849-4mr24\" (UID: \"c2173574-1728-412b-a196-cda3303ddc5e\") " pod="openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24" Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.911418 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c2173574-1728-412b-a196-cda3303ddc5e-client-ca\") pod \"route-controller-manager-64ff8c7849-4mr24\" (UID: \"c2173574-1728-412b-a196-cda3303ddc5e\") " pod="openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24" Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.911442 5120 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c2173574-1728-412b-a196-cda3303ddc5e-tmp\") pod \"route-controller-manager-64ff8c7849-4mr24\" (UID: \"c2173574-1728-412b-a196-cda3303ddc5e\") " pod="openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24" Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.911482 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2173574-1728-412b-a196-cda3303ddc5e-config\") pod \"route-controller-manager-64ff8c7849-4mr24\" (UID: \"c2173574-1728-412b-a196-cda3303ddc5e\") " pod="openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24" Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.911522 5120 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.911533 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.911541 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fjvnq\" (UniqueName: \"kubernetes.io/projected/bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd-kube-api-access-fjvnq\") on node \"crc\" DevicePath \"\"" Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.911549 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:03:48 crc kubenswrapper[5120]: I1211 16:03:48.911557 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd-tmp\") on node \"crc\" DevicePath \"\"" Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.012125 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2173574-1728-412b-a196-cda3303ddc5e-config\") pod \"route-controller-manager-64ff8c7849-4mr24\" (UID: \"c2173574-1728-412b-a196-cda3303ddc5e\") " pod="openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24" Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.012203 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2173574-1728-412b-a196-cda3303ddc5e-serving-cert\") pod \"route-controller-manager-64ff8c7849-4mr24\" (UID: \"c2173574-1728-412b-a196-cda3303ddc5e\") " pod="openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24" Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.012247 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bfkgn\" (UniqueName: \"kubernetes.io/projected/c2173574-1728-412b-a196-cda3303ddc5e-kube-api-access-bfkgn\") pod \"route-controller-manager-64ff8c7849-4mr24\" (UID: \"c2173574-1728-412b-a196-cda3303ddc5e\") " pod="openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24" Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.012272 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c2173574-1728-412b-a196-cda3303ddc5e-client-ca\") pod \"route-controller-manager-64ff8c7849-4mr24\" (UID: \"c2173574-1728-412b-a196-cda3303ddc5e\") " pod="openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24" Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.012297 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/c2173574-1728-412b-a196-cda3303ddc5e-tmp\") pod \"route-controller-manager-64ff8c7849-4mr24\" (UID: \"c2173574-1728-412b-a196-cda3303ddc5e\") " pod="openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24" Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.012779 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c2173574-1728-412b-a196-cda3303ddc5e-tmp\") pod \"route-controller-manager-64ff8c7849-4mr24\" (UID: \"c2173574-1728-412b-a196-cda3303ddc5e\") " pod="openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24" Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.013284 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c2173574-1728-412b-a196-cda3303ddc5e-client-ca\") pod \"route-controller-manager-64ff8c7849-4mr24\" (UID: \"c2173574-1728-412b-a196-cda3303ddc5e\") " pod="openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24" Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.013868 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2173574-1728-412b-a196-cda3303ddc5e-config\") pod \"route-controller-manager-64ff8c7849-4mr24\" (UID: \"c2173574-1728-412b-a196-cda3303ddc5e\") " pod="openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24" Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.015903 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2173574-1728-412b-a196-cda3303ddc5e-serving-cert\") pod \"route-controller-manager-64ff8c7849-4mr24\" (UID: \"c2173574-1728-412b-a196-cda3303ddc5e\") " pod="openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24" Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 
16:03:49.031719 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r" Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.033564 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfkgn\" (UniqueName: \"kubernetes.io/projected/c2173574-1728-412b-a196-cda3303ddc5e-kube-api-access-bfkgn\") pod \"route-controller-manager-64ff8c7849-4mr24\" (UID: \"c2173574-1728-412b-a196-cda3303ddc5e\") " pod="openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24" Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.055703 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-78c449bf6-d25dp"] Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.056412 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="22e0db3c-8094-4ed2-b074-3d0d666070fa" containerName="controller-manager" Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.056433 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="22e0db3c-8094-4ed2-b074-3d0d666070fa" containerName="controller-manager" Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.056583 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="22e0db3c-8094-4ed2-b074-3d0d666070fa" containerName="controller-manager" Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.072627 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-78c449bf6-d25dp" Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.087059 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78c449bf6-d25dp"] Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.113798 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/22e0db3c-8094-4ed2-b074-3d0d666070fa-tmp\") pod \"22e0db3c-8094-4ed2-b074-3d0d666070fa\" (UID: \"22e0db3c-8094-4ed2-b074-3d0d666070fa\") " Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.113924 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22e0db3c-8094-4ed2-b074-3d0d666070fa-config\") pod \"22e0db3c-8094-4ed2-b074-3d0d666070fa\" (UID: \"22e0db3c-8094-4ed2-b074-3d0d666070fa\") " Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.113995 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/22e0db3c-8094-4ed2-b074-3d0d666070fa-client-ca\") pod \"22e0db3c-8094-4ed2-b074-3d0d666070fa\" (UID: \"22e0db3c-8094-4ed2-b074-3d0d666070fa\") " Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.114028 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq4vr\" (UniqueName: \"kubernetes.io/projected/22e0db3c-8094-4ed2-b074-3d0d666070fa-kube-api-access-kq4vr\") pod \"22e0db3c-8094-4ed2-b074-3d0d666070fa\" (UID: \"22e0db3c-8094-4ed2-b074-3d0d666070fa\") " Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.114106 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22e0db3c-8094-4ed2-b074-3d0d666070fa-proxy-ca-bundles\") pod \"22e0db3c-8094-4ed2-b074-3d0d666070fa\" (UID: 
\"22e0db3c-8094-4ed2-b074-3d0d666070fa\") " Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.114167 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22e0db3c-8094-4ed2-b074-3d0d666070fa-serving-cert\") pod \"22e0db3c-8094-4ed2-b074-3d0d666070fa\" (UID: \"22e0db3c-8094-4ed2-b074-3d0d666070fa\") " Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.114354 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-client-ca\") pod \"controller-manager-78c449bf6-d25dp\" (UID: \"e5d71a09-c083-489f-bb0a-0df9a2cbca1a\") " pod="openshift-controller-manager/controller-manager-78c449bf6-d25dp" Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.114446 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-config\") pod \"controller-manager-78c449bf6-d25dp\" (UID: \"e5d71a09-c083-489f-bb0a-0df9a2cbca1a\") " pod="openshift-controller-manager/controller-manager-78c449bf6-d25dp" Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.114512 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5pq8\" (UniqueName: \"kubernetes.io/projected/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-kube-api-access-m5pq8\") pod \"controller-manager-78c449bf6-d25dp\" (UID: \"e5d71a09-c083-489f-bb0a-0df9a2cbca1a\") " pod="openshift-controller-manager/controller-manager-78c449bf6-d25dp" Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.114557 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-serving-cert\") pod 
\"controller-manager-78c449bf6-d25dp\" (UID: \"e5d71a09-c083-489f-bb0a-0df9a2cbca1a\") " pod="openshift-controller-manager/controller-manager-78c449bf6-d25dp"
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.114615 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-tmp\") pod \"controller-manager-78c449bf6-d25dp\" (UID: \"e5d71a09-c083-489f-bb0a-0df9a2cbca1a\") " pod="openshift-controller-manager/controller-manager-78c449bf6-d25dp"
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.114642 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-proxy-ca-bundles\") pod \"controller-manager-78c449bf6-d25dp\" (UID: \"e5d71a09-c083-489f-bb0a-0df9a2cbca1a\") " pod="openshift-controller-manager/controller-manager-78c449bf6-d25dp"
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.115526 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24"
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.115774 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22e0db3c-8094-4ed2-b074-3d0d666070fa-client-ca" (OuterVolumeSpecName: "client-ca") pod "22e0db3c-8094-4ed2-b074-3d0d666070fa" (UID: "22e0db3c-8094-4ed2-b074-3d0d666070fa"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.115818 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22e0db3c-8094-4ed2-b074-3d0d666070fa-config" (OuterVolumeSpecName: "config") pod "22e0db3c-8094-4ed2-b074-3d0d666070fa" (UID: "22e0db3c-8094-4ed2-b074-3d0d666070fa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.115860 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22e0db3c-8094-4ed2-b074-3d0d666070fa-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "22e0db3c-8094-4ed2-b074-3d0d666070fa" (UID: "22e0db3c-8094-4ed2-b074-3d0d666070fa"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.116229 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22e0db3c-8094-4ed2-b074-3d0d666070fa-tmp" (OuterVolumeSpecName: "tmp") pod "22e0db3c-8094-4ed2-b074-3d0d666070fa" (UID: "22e0db3c-8094-4ed2-b074-3d0d666070fa"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.121390 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22e0db3c-8094-4ed2-b074-3d0d666070fa-kube-api-access-kq4vr" (OuterVolumeSpecName: "kube-api-access-kq4vr") pod "22e0db3c-8094-4ed2-b074-3d0d666070fa" (UID: "22e0db3c-8094-4ed2-b074-3d0d666070fa"). InnerVolumeSpecName "kube-api-access-kq4vr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.121490 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22e0db3c-8094-4ed2-b074-3d0d666070fa-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "22e0db3c-8094-4ed2-b074-3d0d666070fa" (UID: "22e0db3c-8094-4ed2-b074-3d0d666070fa"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.215911 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m5pq8\" (UniqueName: \"kubernetes.io/projected/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-kube-api-access-m5pq8\") pod \"controller-manager-78c449bf6-d25dp\" (UID: \"e5d71a09-c083-489f-bb0a-0df9a2cbca1a\") " pod="openshift-controller-manager/controller-manager-78c449bf6-d25dp"
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.215979 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-serving-cert\") pod \"controller-manager-78c449bf6-d25dp\" (UID: \"e5d71a09-c083-489f-bb0a-0df9a2cbca1a\") " pod="openshift-controller-manager/controller-manager-78c449bf6-d25dp"
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.216031 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-tmp\") pod \"controller-manager-78c449bf6-d25dp\" (UID: \"e5d71a09-c083-489f-bb0a-0df9a2cbca1a\") " pod="openshift-controller-manager/controller-manager-78c449bf6-d25dp"
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.216052 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-proxy-ca-bundles\") pod \"controller-manager-78c449bf6-d25dp\" (UID: \"e5d71a09-c083-489f-bb0a-0df9a2cbca1a\") " pod="openshift-controller-manager/controller-manager-78c449bf6-d25dp"
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.216088 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-client-ca\") pod \"controller-manager-78c449bf6-d25dp\" (UID: \"e5d71a09-c083-489f-bb0a-0df9a2cbca1a\") " pod="openshift-controller-manager/controller-manager-78c449bf6-d25dp"
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.216137 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-config\") pod \"controller-manager-78c449bf6-d25dp\" (UID: \"e5d71a09-c083-489f-bb0a-0df9a2cbca1a\") " pod="openshift-controller-manager/controller-manager-78c449bf6-d25dp"
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.216208 5120 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22e0db3c-8094-4ed2-b074-3d0d666070fa-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.216223 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22e0db3c-8094-4ed2-b074-3d0d666070fa-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.216237 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/22e0db3c-8094-4ed2-b074-3d0d666070fa-tmp\") on node \"crc\" DevicePath \"\""
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.216249 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22e0db3c-8094-4ed2-b074-3d0d666070fa-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.216292 5120 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/22e0db3c-8094-4ed2-b074-3d0d666070fa-client-ca\") on node \"crc\" DevicePath \"\""
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.217009 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-tmp\") pod \"controller-manager-78c449bf6-d25dp\" (UID: \"e5d71a09-c083-489f-bb0a-0df9a2cbca1a\") " pod="openshift-controller-manager/controller-manager-78c449bf6-d25dp"
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.217236 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-proxy-ca-bundles\") pod \"controller-manager-78c449bf6-d25dp\" (UID: \"e5d71a09-c083-489f-bb0a-0df9a2cbca1a\") " pod="openshift-controller-manager/controller-manager-78c449bf6-d25dp"
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.217286 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kq4vr\" (UniqueName: \"kubernetes.io/projected/22e0db3c-8094-4ed2-b074-3d0d666070fa-kube-api-access-kq4vr\") on node \"crc\" DevicePath \"\""
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.217405 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-client-ca\") pod \"controller-manager-78c449bf6-d25dp\" (UID: \"e5d71a09-c083-489f-bb0a-0df9a2cbca1a\") " pod="openshift-controller-manager/controller-manager-78c449bf6-d25dp"
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.218458 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-config\") pod \"controller-manager-78c449bf6-d25dp\" (UID: \"e5d71a09-c083-489f-bb0a-0df9a2cbca1a\") " pod="openshift-controller-manager/controller-manager-78c449bf6-d25dp"
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.225731 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-serving-cert\") pod \"controller-manager-78c449bf6-d25dp\" (UID: \"e5d71a09-c083-489f-bb0a-0df9a2cbca1a\") " pod="openshift-controller-manager/controller-manager-78c449bf6-d25dp"
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.249447 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5pq8\" (UniqueName: \"kubernetes.io/projected/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-kube-api-access-m5pq8\") pod \"controller-manager-78c449bf6-d25dp\" (UID: \"e5d71a09-c083-489f-bb0a-0df9a2cbca1a\") " pod="openshift-controller-manager/controller-manager-78c449bf6-d25dp"
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.362045 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24"]
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.391892 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78c449bf6-d25dp"
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.481978 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24" event={"ID":"c2173574-1728-412b-a196-cda3303ddc5e","Type":"ContainerStarted","Data":"18e934d1f7607e1bb5fc26f5b3f24fea9eb9637f409e62a1de2f09d8758f957a"}
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.497743 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r" event={"ID":"22e0db3c-8094-4ed2-b074-3d0d666070fa","Type":"ContainerDied","Data":"36ed1e8edde611c04531904b7f11e9016e3bdb4818d89e002ce9a1e3506b657d"}
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.497763 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r"
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.497804 5120 scope.go:117] "RemoveContainer" containerID="7edf61289f1853a92999ba4994c994647d0715b038221c32edab06f608aee5fb"
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.503934 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz" event={"ID":"bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd","Type":"ContainerDied","Data":"ef3e24db75f70eb9465b64c845549e0aa3b06f09bb1a4b711894e961688ac90a"}
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.504103 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz"
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.535370 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz"]
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.535400 5120 scope.go:117] "RemoveContainer" containerID="a5f49e89569eb4489d28776bcffaf3c093e9a363c40e8ebde60ca062f5a7b0f3"
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.537951 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79fb5c4496-rz5cz"]
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.548797 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r"]
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.553087 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5b8cb6cc5c-q8s5r"]
Dec 11 16:03:49 crc kubenswrapper[5120]: I1211 16:03:49.623660 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78c449bf6-d25dp"]
Dec 11 16:03:49 crc kubenswrapper[5120]: W1211 16:03:49.631337 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5d71a09_c083_489f_bb0a_0df9a2cbca1a.slice/crio-9230fdc6f23b04776ebf764bd9a6917e671c121468a693a57f8a70b4294ec175 WatchSource:0}: Error finding container 9230fdc6f23b04776ebf764bd9a6917e671c121468a693a57f8a70b4294ec175: Status 404 returned error can't find the container with id 9230fdc6f23b04776ebf764bd9a6917e671c121468a693a57f8a70b4294ec175
Dec 11 16:03:50 crc kubenswrapper[5120]: I1211 16:03:50.512746 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24" event={"ID":"c2173574-1728-412b-a196-cda3303ddc5e","Type":"ContainerStarted","Data":"55eb23d789ad04cbda7b8c2355b13b39d05eced6f99af26f0462b4b35c156aba"}
Dec 11 16:03:50 crc kubenswrapper[5120]: I1211 16:03:50.512926 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24"
Dec 11 16:03:50 crc kubenswrapper[5120]: I1211 16:03:50.515038 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78c449bf6-d25dp" event={"ID":"e5d71a09-c083-489f-bb0a-0df9a2cbca1a","Type":"ContainerStarted","Data":"0c6fc272c96dd82736071918cebab96752043092c44bede42ee3d2502fef272c"}
Dec 11 16:03:50 crc kubenswrapper[5120]: I1211 16:03:50.515062 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78c449bf6-d25dp" event={"ID":"e5d71a09-c083-489f-bb0a-0df9a2cbca1a","Type":"ContainerStarted","Data":"9230fdc6f23b04776ebf764bd9a6917e671c121468a693a57f8a70b4294ec175"}
Dec 11 16:03:50 crc kubenswrapper[5120]: I1211 16:03:50.515605 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-78c449bf6-d25dp"
Dec 11 16:03:50 crc kubenswrapper[5120]: I1211 16:03:50.518683 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24"
Dec 11 16:03:50 crc kubenswrapper[5120]: I1211 16:03:50.520839 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-78c449bf6-d25dp"
Dec 11 16:03:50 crc kubenswrapper[5120]: I1211 16:03:50.533520 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24" podStartSLOduration=2.533486958 podStartE2EDuration="2.533486958s" podCreationTimestamp="2025-12-11 16:03:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:03:50.528137037 +0000 UTC m=+179.782440368" watchObservedRunningTime="2025-12-11 16:03:50.533486958 +0000 UTC m=+179.787790329"
Dec 11 16:03:50 crc kubenswrapper[5120]: I1211 16:03:50.549105 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-78c449bf6-d25dp" podStartSLOduration=2.549083639 podStartE2EDuration="2.549083639s" podCreationTimestamp="2025-12-11 16:03:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:03:50.545383604 +0000 UTC m=+179.799686965" watchObservedRunningTime="2025-12-11 16:03:50.549083639 +0000 UTC m=+179.803386990"
Dec 11 16:03:51 crc kubenswrapper[5120]: I1211 16:03:51.042196 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22e0db3c-8094-4ed2-b074-3d0d666070fa" path="/var/lib/kubelet/pods/22e0db3c-8094-4ed2-b074-3d0d666070fa/volumes"
Dec 11 16:03:51 crc kubenswrapper[5120]: I1211 16:03:51.044198 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd" path="/var/lib/kubelet/pods/bc8bcc1b-f107-4688-a607-9b3b9d7cd9dd/volumes"
Dec 11 16:03:51 crc kubenswrapper[5120]: I1211 16:03:51.133277 5120 pod_container_manager_linux.go:217] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod5ca3a5e5-2aab-4e5b-8756-2a725e8b3346"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod5ca3a5e5-2aab-4e5b-8756-2a725e8b3346] : Timed out while waiting for systemd to remove kubepods-burstable-pod5ca3a5e5_2aab_4e5b_8756_2a725e8b3346.slice"
Dec 11 16:03:51 crc kubenswrapper[5120]: E1211 16:03:51.133451 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable pod5ca3a5e5-2aab-4e5b-8756-2a725e8b3346] : unable to destroy cgroup paths for cgroup [kubepods burstable pod5ca3a5e5-2aab-4e5b-8756-2a725e8b3346] : Timed out while waiting for systemd to remove kubepods-burstable-pod5ca3a5e5_2aab_4e5b_8756_2a725e8b3346.slice" pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t" podUID="5ca3a5e5-2aab-4e5b-8756-2a725e8b3346"
Dec 11 16:03:51 crc kubenswrapper[5120]: I1211 16:03:51.519688 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-9jr6t"
Dec 11 16:03:51 crc kubenswrapper[5120]: I1211 16:03:51.551260 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-9jr6t"]
Dec 11 16:03:51 crc kubenswrapper[5120]: I1211 16:03:51.553555 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-9jr6t"]
Dec 11 16:03:53 crc kubenswrapper[5120]: I1211 16:03:53.028247 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ca3a5e5-2aab-4e5b-8756-2a725e8b3346" path="/var/lib/kubelet/pods/5ca3a5e5-2aab-4e5b-8756-2a725e8b3346/volumes"
Dec 11 16:03:53 crc kubenswrapper[5120]: I1211 16:03:53.175216 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.225699 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-78c449bf6-d25dp"]
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.226702 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-78c449bf6-d25dp" podUID="e5d71a09-c083-489f-bb0a-0df9a2cbca1a" containerName="controller-manager" containerID="cri-o://0c6fc272c96dd82736071918cebab96752043092c44bede42ee3d2502fef272c" gracePeriod=30
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.239917 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24"]
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.240315 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24" podUID="c2173574-1728-412b-a196-cda3303ddc5e" containerName="route-controller-manager" containerID="cri-o://55eb23d789ad04cbda7b8c2355b13b39d05eced6f99af26f0462b4b35c156aba" gracePeriod=30
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.612588 5120 generic.go:358] "Generic (PLEG): container finished" podID="c2173574-1728-412b-a196-cda3303ddc5e" containerID="55eb23d789ad04cbda7b8c2355b13b39d05eced6f99af26f0462b4b35c156aba" exitCode=0
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.612659 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24" event={"ID":"c2173574-1728-412b-a196-cda3303ddc5e","Type":"ContainerDied","Data":"55eb23d789ad04cbda7b8c2355b13b39d05eced6f99af26f0462b4b35c156aba"}
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.615616 5120 generic.go:358] "Generic (PLEG): container finished" podID="e5d71a09-c083-489f-bb0a-0df9a2cbca1a" containerID="0c6fc272c96dd82736071918cebab96752043092c44bede42ee3d2502fef272c" exitCode=0
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.615688 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78c449bf6-d25dp" event={"ID":"e5d71a09-c083-489f-bb0a-0df9a2cbca1a","Type":"ContainerDied","Data":"0c6fc272c96dd82736071918cebab96752043092c44bede42ee3d2502fef272c"}
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.706006 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24"
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.729915 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-854b65956-wqm84"]
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.730492 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2173574-1728-412b-a196-cda3303ddc5e" containerName="route-controller-manager"
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.730511 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2173574-1728-412b-a196-cda3303ddc5e" containerName="route-controller-manager"
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.730596 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="c2173574-1728-412b-a196-cda3303ddc5e" containerName="route-controller-manager"
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.733765 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-854b65956-wqm84"
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.746166 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-854b65956-wqm84"]
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.886035 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2173574-1728-412b-a196-cda3303ddc5e-serving-cert\") pod \"c2173574-1728-412b-a196-cda3303ddc5e\" (UID: \"c2173574-1728-412b-a196-cda3303ddc5e\") "
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.886412 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c2173574-1728-412b-a196-cda3303ddc5e-client-ca\") pod \"c2173574-1728-412b-a196-cda3303ddc5e\" (UID: \"c2173574-1728-412b-a196-cda3303ddc5e\") "
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.886442 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2173574-1728-412b-a196-cda3303ddc5e-config\") pod \"c2173574-1728-412b-a196-cda3303ddc5e\" (UID: \"c2173574-1728-412b-a196-cda3303ddc5e\") "
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.886492 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c2173574-1728-412b-a196-cda3303ddc5e-tmp\") pod \"c2173574-1728-412b-a196-cda3303ddc5e\" (UID: \"c2173574-1728-412b-a196-cda3303ddc5e\") "
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.886530 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfkgn\" (UniqueName: \"kubernetes.io/projected/c2173574-1728-412b-a196-cda3303ddc5e-kube-api-access-bfkgn\") pod \"c2173574-1728-412b-a196-cda3303ddc5e\" (UID: \"c2173574-1728-412b-a196-cda3303ddc5e\") "
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.886682 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bc6fb45-b49c-498c-99a8-f23fcbcd87a9-serving-cert\") pod \"route-controller-manager-854b65956-wqm84\" (UID: \"4bc6fb45-b49c-498c-99a8-f23fcbcd87a9\") " pod="openshift-route-controller-manager/route-controller-manager-854b65956-wqm84"
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.886782 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4bc6fb45-b49c-498c-99a8-f23fcbcd87a9-client-ca\") pod \"route-controller-manager-854b65956-wqm84\" (UID: \"4bc6fb45-b49c-498c-99a8-f23fcbcd87a9\") " pod="openshift-route-controller-manager/route-controller-manager-854b65956-wqm84"
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.886816 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gflnm\" (UniqueName: \"kubernetes.io/projected/4bc6fb45-b49c-498c-99a8-f23fcbcd87a9-kube-api-access-gflnm\") pod \"route-controller-manager-854b65956-wqm84\" (UID: \"4bc6fb45-b49c-498c-99a8-f23fcbcd87a9\") " pod="openshift-route-controller-manager/route-controller-manager-854b65956-wqm84"
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.886929 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bc6fb45-b49c-498c-99a8-f23fcbcd87a9-config\") pod \"route-controller-manager-854b65956-wqm84\" (UID: \"4bc6fb45-b49c-498c-99a8-f23fcbcd87a9\") " pod="openshift-route-controller-manager/route-controller-manager-854b65956-wqm84"
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.886944 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4bc6fb45-b49c-498c-99a8-f23fcbcd87a9-tmp\") pod \"route-controller-manager-854b65956-wqm84\" (UID: \"4bc6fb45-b49c-498c-99a8-f23fcbcd87a9\") " pod="openshift-route-controller-manager/route-controller-manager-854b65956-wqm84"
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.887181 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2173574-1728-412b-a196-cda3303ddc5e-client-ca" (OuterVolumeSpecName: "client-ca") pod "c2173574-1728-412b-a196-cda3303ddc5e" (UID: "c2173574-1728-412b-a196-cda3303ddc5e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.887198 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2173574-1728-412b-a196-cda3303ddc5e-tmp" (OuterVolumeSpecName: "tmp") pod "c2173574-1728-412b-a196-cda3303ddc5e" (UID: "c2173574-1728-412b-a196-cda3303ddc5e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.887347 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2173574-1728-412b-a196-cda3303ddc5e-config" (OuterVolumeSpecName: "config") pod "c2173574-1728-412b-a196-cda3303ddc5e" (UID: "c2173574-1728-412b-a196-cda3303ddc5e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.896232 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2173574-1728-412b-a196-cda3303ddc5e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c2173574-1728-412b-a196-cda3303ddc5e" (UID: "c2173574-1728-412b-a196-cda3303ddc5e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.901287 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2173574-1728-412b-a196-cda3303ddc5e-kube-api-access-bfkgn" (OuterVolumeSpecName: "kube-api-access-bfkgn") pod "c2173574-1728-412b-a196-cda3303ddc5e" (UID: "c2173574-1728-412b-a196-cda3303ddc5e"). InnerVolumeSpecName "kube-api-access-bfkgn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.925677 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78c449bf6-d25dp"
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.971703 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-78477b69c6-9nnpt"]
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.972261 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e5d71a09-c083-489f-bb0a-0df9a2cbca1a" containerName="controller-manager"
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.972282 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d71a09-c083-489f-bb0a-0df9a2cbca1a" containerName="controller-manager"
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.972399 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="e5d71a09-c083-489f-bb0a-0df9a2cbca1a" containerName="controller-manager"
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.979023 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78477b69c6-9nnpt"
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.984279 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78477b69c6-9nnpt"]
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.988628 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bc6fb45-b49c-498c-99a8-f23fcbcd87a9-config\") pod \"route-controller-manager-854b65956-wqm84\" (UID: \"4bc6fb45-b49c-498c-99a8-f23fcbcd87a9\") " pod="openshift-route-controller-manager/route-controller-manager-854b65956-wqm84"
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.988691 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4bc6fb45-b49c-498c-99a8-f23fcbcd87a9-tmp\") pod \"route-controller-manager-854b65956-wqm84\" (UID: \"4bc6fb45-b49c-498c-99a8-f23fcbcd87a9\") " pod="openshift-route-controller-manager/route-controller-manager-854b65956-wqm84"
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.988862 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bc6fb45-b49c-498c-99a8-f23fcbcd87a9-serving-cert\") pod \"route-controller-manager-854b65956-wqm84\" (UID: \"4bc6fb45-b49c-498c-99a8-f23fcbcd87a9\") " pod="openshift-route-controller-manager/route-controller-manager-854b65956-wqm84"
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.989052 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4bc6fb45-b49c-498c-99a8-f23fcbcd87a9-client-ca\") pod \"route-controller-manager-854b65956-wqm84\" (UID: \"4bc6fb45-b49c-498c-99a8-f23fcbcd87a9\") " pod="openshift-route-controller-manager/route-controller-manager-854b65956-wqm84"
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.989113 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gflnm\" (UniqueName: \"kubernetes.io/projected/4bc6fb45-b49c-498c-99a8-f23fcbcd87a9-kube-api-access-gflnm\") pod \"route-controller-manager-854b65956-wqm84\" (UID: \"4bc6fb45-b49c-498c-99a8-f23fcbcd87a9\") " pod="openshift-route-controller-manager/route-controller-manager-854b65956-wqm84"
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.989342 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c2173574-1728-412b-a196-cda3303ddc5e-tmp\") on node \"crc\" DevicePath \"\""
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.989377 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bfkgn\" (UniqueName: \"kubernetes.io/projected/c2173574-1728-412b-a196-cda3303ddc5e-kube-api-access-bfkgn\") on node \"crc\" DevicePath \"\""
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.989393 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2173574-1728-412b-a196-cda3303ddc5e-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.989405 5120 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c2173574-1728-412b-a196-cda3303ddc5e-client-ca\") on node \"crc\" DevicePath \"\""
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.989415 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2173574-1728-412b-a196-cda3303ddc5e-config\") on node \"crc\" DevicePath \"\""
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.989448 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4bc6fb45-b49c-498c-99a8-f23fcbcd87a9-tmp\") pod \"route-controller-manager-854b65956-wqm84\" (UID: \"4bc6fb45-b49c-498c-99a8-f23fcbcd87a9\") " pod="openshift-route-controller-manager/route-controller-manager-854b65956-wqm84"
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.990350 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4bc6fb45-b49c-498c-99a8-f23fcbcd87a9-client-ca\") pod \"route-controller-manager-854b65956-wqm84\" (UID: \"4bc6fb45-b49c-498c-99a8-f23fcbcd87a9\") " pod="openshift-route-controller-manager/route-controller-manager-854b65956-wqm84"
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.990544 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bc6fb45-b49c-498c-99a8-f23fcbcd87a9-config\") pod \"route-controller-manager-854b65956-wqm84\" (UID: \"4bc6fb45-b49c-498c-99a8-f23fcbcd87a9\") " pod="openshift-route-controller-manager/route-controller-manager-854b65956-wqm84"
Dec 11 16:04:08 crc kubenswrapper[5120]: I1211 16:04:08.993898 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bc6fb45-b49c-498c-99a8-f23fcbcd87a9-serving-cert\") pod \"route-controller-manager-854b65956-wqm84\" (UID: \"4bc6fb45-b49c-498c-99a8-f23fcbcd87a9\") " pod="openshift-route-controller-manager/route-controller-manager-854b65956-wqm84"
Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.011564 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gflnm\" (UniqueName: \"kubernetes.io/projected/4bc6fb45-b49c-498c-99a8-f23fcbcd87a9-kube-api-access-gflnm\") pod \"route-controller-manager-854b65956-wqm84\" (UID: \"4bc6fb45-b49c-498c-99a8-f23fcbcd87a9\") " pod="openshift-route-controller-manager/route-controller-manager-854b65956-wqm84"
Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.051710 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-854b65956-wqm84"
Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.090748 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5pq8\" (UniqueName: \"kubernetes.io/projected/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-kube-api-access-m5pq8\") pod \"e5d71a09-c083-489f-bb0a-0df9a2cbca1a\" (UID: \"e5d71a09-c083-489f-bb0a-0df9a2cbca1a\") "
Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.090977 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-proxy-ca-bundles\") pod \"e5d71a09-c083-489f-bb0a-0df9a2cbca1a\" (UID: \"e5d71a09-c083-489f-bb0a-0df9a2cbca1a\") "
Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.091103 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-tmp\") pod \"e5d71a09-c083-489f-bb0a-0df9a2cbca1a\" (UID: \"e5d71a09-c083-489f-bb0a-0df9a2cbca1a\") "
Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.091300 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-config\") pod \"e5d71a09-c083-489f-bb0a-0df9a2cbca1a\" (UID: \"e5d71a09-c083-489f-bb0a-0df9a2cbca1a\") "
Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.091386 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-client-ca\") pod \"e5d71a09-c083-489f-bb0a-0df9a2cbca1a\" (UID: \"e5d71a09-c083-489f-bb0a-0df9a2cbca1a\") "
Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.091452 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume
"kubernetes.io/empty-dir/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-tmp" (OuterVolumeSpecName: "tmp") pod "e5d71a09-c083-489f-bb0a-0df9a2cbca1a" (UID: "e5d71a09-c083-489f-bb0a-0df9a2cbca1a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.091557 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-serving-cert\") pod \"e5d71a09-c083-489f-bb0a-0df9a2cbca1a\" (UID: \"e5d71a09-c083-489f-bb0a-0df9a2cbca1a\") " Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.091723 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5efef48-0dee-448a-b35c-d64ed0760757-serving-cert\") pod \"controller-manager-78477b69c6-9nnpt\" (UID: \"e5efef48-0dee-448a-b35c-d64ed0760757\") " pod="openshift-controller-manager/controller-manager-78477b69c6-9nnpt" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.091810 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e5efef48-0dee-448a-b35c-d64ed0760757-tmp\") pod \"controller-manager-78477b69c6-9nnpt\" (UID: \"e5efef48-0dee-448a-b35c-d64ed0760757\") " pod="openshift-controller-manager/controller-manager-78477b69c6-9nnpt" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.091718 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e5d71a09-c083-489f-bb0a-0df9a2cbca1a" (UID: "e5d71a09-c083-489f-bb0a-0df9a2cbca1a"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.091832 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-config" (OuterVolumeSpecName: "config") pod "e5d71a09-c083-489f-bb0a-0df9a2cbca1a" (UID: "e5d71a09-c083-489f-bb0a-0df9a2cbca1a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.091899 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-client-ca" (OuterVolumeSpecName: "client-ca") pod "e5d71a09-c083-489f-bb0a-0df9a2cbca1a" (UID: "e5d71a09-c083-489f-bb0a-0df9a2cbca1a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.091924 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktwhl\" (UniqueName: \"kubernetes.io/projected/e5efef48-0dee-448a-b35c-d64ed0760757-kube-api-access-ktwhl\") pod \"controller-manager-78477b69c6-9nnpt\" (UID: \"e5efef48-0dee-448a-b35c-d64ed0760757\") " pod="openshift-controller-manager/controller-manager-78477b69c6-9nnpt" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.092145 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5efef48-0dee-448a-b35c-d64ed0760757-config\") pod \"controller-manager-78477b69c6-9nnpt\" (UID: \"e5efef48-0dee-448a-b35c-d64ed0760757\") " pod="openshift-controller-manager/controller-manager-78477b69c6-9nnpt" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.092328 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/e5efef48-0dee-448a-b35c-d64ed0760757-client-ca\") pod \"controller-manager-78477b69c6-9nnpt\" (UID: \"e5efef48-0dee-448a-b35c-d64ed0760757\") " pod="openshift-controller-manager/controller-manager-78477b69c6-9nnpt" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.092396 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5efef48-0dee-448a-b35c-d64ed0760757-proxy-ca-bundles\") pod \"controller-manager-78477b69c6-9nnpt\" (UID: \"e5efef48-0dee-448a-b35c-d64ed0760757\") " pod="openshift-controller-manager/controller-manager-78477b69c6-9nnpt" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.092525 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.092549 5120 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.092564 5120 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.092576 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-tmp\") on node \"crc\" DevicePath \"\"" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.095299 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod 
"e5d71a09-c083-489f-bb0a-0df9a2cbca1a" (UID: "e5d71a09-c083-489f-bb0a-0df9a2cbca1a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.095345 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-kube-api-access-m5pq8" (OuterVolumeSpecName: "kube-api-access-m5pq8") pod "e5d71a09-c083-489f-bb0a-0df9a2cbca1a" (UID: "e5d71a09-c083-489f-bb0a-0df9a2cbca1a"). InnerVolumeSpecName "kube-api-access-m5pq8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.193921 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5efef48-0dee-448a-b35c-d64ed0760757-client-ca\") pod \"controller-manager-78477b69c6-9nnpt\" (UID: \"e5efef48-0dee-448a-b35c-d64ed0760757\") " pod="openshift-controller-manager/controller-manager-78477b69c6-9nnpt" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.193987 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5efef48-0dee-448a-b35c-d64ed0760757-proxy-ca-bundles\") pod \"controller-manager-78477b69c6-9nnpt\" (UID: \"e5efef48-0dee-448a-b35c-d64ed0760757\") " pod="openshift-controller-manager/controller-manager-78477b69c6-9nnpt" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.194035 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5efef48-0dee-448a-b35c-d64ed0760757-serving-cert\") pod \"controller-manager-78477b69c6-9nnpt\" (UID: \"e5efef48-0dee-448a-b35c-d64ed0760757\") " pod="openshift-controller-manager/controller-manager-78477b69c6-9nnpt" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.194069 5120 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e5efef48-0dee-448a-b35c-d64ed0760757-tmp\") pod \"controller-manager-78477b69c6-9nnpt\" (UID: \"e5efef48-0dee-448a-b35c-d64ed0760757\") " pod="openshift-controller-manager/controller-manager-78477b69c6-9nnpt" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.194119 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ktwhl\" (UniqueName: \"kubernetes.io/projected/e5efef48-0dee-448a-b35c-d64ed0760757-kube-api-access-ktwhl\") pod \"controller-manager-78477b69c6-9nnpt\" (UID: \"e5efef48-0dee-448a-b35c-d64ed0760757\") " pod="openshift-controller-manager/controller-manager-78477b69c6-9nnpt" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.194173 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5efef48-0dee-448a-b35c-d64ed0760757-config\") pod \"controller-manager-78477b69c6-9nnpt\" (UID: \"e5efef48-0dee-448a-b35c-d64ed0760757\") " pod="openshift-controller-manager/controller-manager-78477b69c6-9nnpt" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.194286 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5pq8\" (UniqueName: \"kubernetes.io/projected/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-kube-api-access-m5pq8\") on node \"crc\" DevicePath \"\"" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.194299 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5d71a09-c083-489f-bb0a-0df9a2cbca1a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.195491 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5efef48-0dee-448a-b35c-d64ed0760757-client-ca\") pod \"controller-manager-78477b69c6-9nnpt\" (UID: 
\"e5efef48-0dee-448a-b35c-d64ed0760757\") " pod="openshift-controller-manager/controller-manager-78477b69c6-9nnpt" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.195558 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e5efef48-0dee-448a-b35c-d64ed0760757-tmp\") pod \"controller-manager-78477b69c6-9nnpt\" (UID: \"e5efef48-0dee-448a-b35c-d64ed0760757\") " pod="openshift-controller-manager/controller-manager-78477b69c6-9nnpt" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.196328 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5efef48-0dee-448a-b35c-d64ed0760757-config\") pod \"controller-manager-78477b69c6-9nnpt\" (UID: \"e5efef48-0dee-448a-b35c-d64ed0760757\") " pod="openshift-controller-manager/controller-manager-78477b69c6-9nnpt" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.196481 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5efef48-0dee-448a-b35c-d64ed0760757-proxy-ca-bundles\") pod \"controller-manager-78477b69c6-9nnpt\" (UID: \"e5efef48-0dee-448a-b35c-d64ed0760757\") " pod="openshift-controller-manager/controller-manager-78477b69c6-9nnpt" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.199424 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5efef48-0dee-448a-b35c-d64ed0760757-serving-cert\") pod \"controller-manager-78477b69c6-9nnpt\" (UID: \"e5efef48-0dee-448a-b35c-d64ed0760757\") " pod="openshift-controller-manager/controller-manager-78477b69c6-9nnpt" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.212355 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktwhl\" (UniqueName: \"kubernetes.io/projected/e5efef48-0dee-448a-b35c-d64ed0760757-kube-api-access-ktwhl\") pod 
\"controller-manager-78477b69c6-9nnpt\" (UID: \"e5efef48-0dee-448a-b35c-d64ed0760757\") " pod="openshift-controller-manager/controller-manager-78477b69c6-9nnpt" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.247432 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-854b65956-wqm84"] Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.304196 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78477b69c6-9nnpt" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.545593 5120 ???:1] "http: TLS handshake error from 192.168.126.11:37210: no serving certificate available for the kubelet" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.622820 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78c449bf6-d25dp" event={"ID":"e5d71a09-c083-489f-bb0a-0df9a2cbca1a","Type":"ContainerDied","Data":"9230fdc6f23b04776ebf764bd9a6917e671c121468a693a57f8a70b4294ec175"} Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.622884 5120 scope.go:117] "RemoveContainer" containerID="0c6fc272c96dd82736071918cebab96752043092c44bede42ee3d2502fef272c" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.623013 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-78c449bf6-d25dp" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.627841 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-854b65956-wqm84" event={"ID":"4bc6fb45-b49c-498c-99a8-f23fcbcd87a9","Type":"ContainerStarted","Data":"921487ddca6c6fb8f3b51cf4bd2dd7d968e309f22f8e3dca1ffeb34bc63b615d"} Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.627886 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-854b65956-wqm84" event={"ID":"4bc6fb45-b49c-498c-99a8-f23fcbcd87a9","Type":"ContainerStarted","Data":"227d35183be0a14f1fc43298dd812dc78027677539c771a440b0b4e7a6ffa118"} Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.629186 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-854b65956-wqm84" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.632286 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24" event={"ID":"c2173574-1728-412b-a196-cda3303ddc5e","Type":"ContainerDied","Data":"18e934d1f7607e1bb5fc26f5b3f24fea9eb9637f409e62a1de2f09d8758f957a"} Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.632299 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.643960 5120 scope.go:117] "RemoveContainer" containerID="55eb23d789ad04cbda7b8c2355b13b39d05eced6f99af26f0462b4b35c156aba" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.644911 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-854b65956-wqm84" podStartSLOduration=1.644893913 podStartE2EDuration="1.644893913s" podCreationTimestamp="2025-12-11 16:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:04:09.644129261 +0000 UTC m=+198.898432592" watchObservedRunningTime="2025-12-11 16:04:09.644893913 +0000 UTC m=+198.899197244" Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.670737 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24"] Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.670789 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64ff8c7849-4mr24"] Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.674047 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-78c449bf6-d25dp"] Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.676103 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-78c449bf6-d25dp"] Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.701647 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78477b69c6-9nnpt"] Dec 11 16:04:09 crc kubenswrapper[5120]: I1211 16:04:09.885909 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-route-controller-manager/route-controller-manager-854b65956-wqm84" Dec 11 16:04:10 crc kubenswrapper[5120]: I1211 16:04:10.642038 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78477b69c6-9nnpt" event={"ID":"e5efef48-0dee-448a-b35c-d64ed0760757","Type":"ContainerStarted","Data":"48a6bcde5f70344f03624ef1a6465eeae9c61f21a64e2c25d9c9f7484121023f"} Dec 11 16:04:10 crc kubenswrapper[5120]: I1211 16:04:10.642073 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78477b69c6-9nnpt" event={"ID":"e5efef48-0dee-448a-b35c-d64ed0760757","Type":"ContainerStarted","Data":"e166281ca0b94272bdfe6823a5c45a5ec06e2a5f70649c39764579bc3392b8f0"} Dec 11 16:04:10 crc kubenswrapper[5120]: I1211 16:04:10.642374 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-78477b69c6-9nnpt" Dec 11 16:04:10 crc kubenswrapper[5120]: I1211 16:04:10.966401 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-78477b69c6-9nnpt" Dec 11 16:04:10 crc kubenswrapper[5120]: I1211 16:04:10.983348 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-78477b69c6-9nnpt" podStartSLOduration=2.983324706 podStartE2EDuration="2.983324706s" podCreationTimestamp="2025-12-11 16:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:04:10.658409409 +0000 UTC m=+199.912712740" watchObservedRunningTime="2025-12-11 16:04:10.983324706 +0000 UTC m=+200.237628047" Dec 11 16:04:11 crc kubenswrapper[5120]: I1211 16:04:11.029865 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2173574-1728-412b-a196-cda3303ddc5e" 
path="/var/lib/kubelet/pods/c2173574-1728-412b-a196-cda3303ddc5e/volumes" Dec 11 16:04:11 crc kubenswrapper[5120]: I1211 16:04:11.030454 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5d71a09-c083-489f-bb0a-0df9a2cbca1a" path="/var/lib/kubelet/pods/e5d71a09-c083-489f-bb0a-0df9a2cbca1a/volumes" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.075238 5120 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.085634 5120 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.085675 5120 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.085873 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.086274 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://137385641c3a889a062861c4d4c5e74639a19c7146eb48b5aa38b856f33f74b9" gracePeriod=15 Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.086385 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://a663f4dad2bc6a5fe1ec338c1f0a64a0d6d616647dd0615b10a021db83128824" gracePeriod=15 Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.086464 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" 
containerName="kube-apiserver-check-endpoints" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.086491 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.086499 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.086506 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.086514 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.086520 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.086537 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.086394 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://82e5abe2e48b9bc24be2a124ddb74d73753e6d23955cfc138efb98d4bd6f1d79" gracePeriod=15 Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.086543 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.086632 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing 
container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.086650 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.086389 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://a7d1c45dc8b53e74445e58caf0d0fbb1af46161d873688e7f02b38fd0428ed6a" gracePeriod=15 Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.086678 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.086773 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.086813 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.086825 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.086838 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.086430 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://667d0fa793fa7a1f4bd25f2d9712e1904e20430fe61ba718ced9f03551336e97" gracePeriod=15 Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.086847 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.086959 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.086974 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.087179 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.087196 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.087207 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.087217 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.087228 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.087236 5120 memory_manager.go:356] "RemoveStaleState 
removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.087249 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.087259 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.087268 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.087388 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.087398 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.090656 5120 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.111709 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: E1211 16:04:21.115703 5120 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.12:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 
11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.152853 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.152905 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.152945 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.153127 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.153223 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.153254 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.153279 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.153378 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.153474 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.153584 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") 
" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.254723 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.254873 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.254913 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.254944 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.255015 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:04:21 
crc kubenswrapper[5120]: I1211 16:04:21.255121 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.255203 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.255237 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.255263 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.255312 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.255343 5120 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.255415 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.255468 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.255500 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.255523 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.255599 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.255618 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.255562 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.255575 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.255583 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.416640 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: E1211 16:04:21.441708 5120 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.12:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188034c1ed261ea5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:04:21.441265317 +0000 UTC m=+210.695568658,LastTimestamp:2025-12-11 16:04:21.441265317 +0000 UTC m=+210.695568658,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.718548 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"fcc7f5d58dd45905087a0754a3664c22b40348063f4558d67a8e8d655b913d35"} Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.718595 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"04540449319efca2a74bbf158b24b0138b2f51baad41b0cbc01f44b21e59e3ba"} Dec 11 16:04:21 crc 
kubenswrapper[5120]: I1211 16:04:21.718849 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: E1211 16:04:21.719681 5120 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.12:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.720471 5120 generic.go:358] "Generic (PLEG): container finished" podID="61cdc95b-8445-4386-87b3-a5a6c1ef5409" containerID="e285d1bb257a9071a1480e0c5a10a65fe338b11dbfec206177bd2059a862d066" exitCode=0 Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.720556 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"61cdc95b-8445-4386-87b3-a5a6c1ef5409","Type":"ContainerDied","Data":"e285d1bb257a9071a1480e0c5a10a65fe338b11dbfec206177bd2059a862d066"} Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.720962 5120 status_manager.go:895] "Failed to get status for pod" podUID="61cdc95b-8445-4386-87b3-a5a6c1ef5409" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.722077 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.724203 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 
16:04:21.724749 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="a663f4dad2bc6a5fe1ec338c1f0a64a0d6d616647dd0615b10a021db83128824" exitCode=0 Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.724771 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="667d0fa793fa7a1f4bd25f2d9712e1904e20430fe61ba718ced9f03551336e97" exitCode=0 Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.724779 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="a7d1c45dc8b53e74445e58caf0d0fbb1af46161d873688e7f02b38fd0428ed6a" exitCode=0 Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.724787 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="82e5abe2e48b9bc24be2a124ddb74d73753e6d23955cfc138efb98d4bd6f1d79" exitCode=2 Dec 11 16:04:21 crc kubenswrapper[5120]: I1211 16:04:21.724812 5120 scope.go:117] "RemoveContainer" containerID="7a41fbe2b0881e86b16c4ddd845a97a6f0fe9b72c6b542e1e379a369c26766ad" Dec 11 16:04:22 crc kubenswrapper[5120]: I1211 16:04:22.732242 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.043641 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.044230 5120 status_manager.go:895] "Failed to get status for pod" podUID="61cdc95b-8445-4386-87b3-a5a6c1ef5409" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.078201 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/61cdc95b-8445-4386-87b3-a5a6c1ef5409-kubelet-dir\") pod \"61cdc95b-8445-4386-87b3-a5a6c1ef5409\" (UID: \"61cdc95b-8445-4386-87b3-a5a6c1ef5409\") " Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.078311 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61cdc95b-8445-4386-87b3-a5a6c1ef5409-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "61cdc95b-8445-4386-87b3-a5a6c1ef5409" (UID: "61cdc95b-8445-4386-87b3-a5a6c1ef5409"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.078622 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61cdc95b-8445-4386-87b3-a5a6c1ef5409-kube-api-access\") pod \"61cdc95b-8445-4386-87b3-a5a6c1ef5409\" (UID: \"61cdc95b-8445-4386-87b3-a5a6c1ef5409\") " Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.078664 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/61cdc95b-8445-4386-87b3-a5a6c1ef5409-var-lock\") pod \"61cdc95b-8445-4386-87b3-a5a6c1ef5409\" (UID: \"61cdc95b-8445-4386-87b3-a5a6c1ef5409\") " Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.078811 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61cdc95b-8445-4386-87b3-a5a6c1ef5409-var-lock" (OuterVolumeSpecName: "var-lock") pod "61cdc95b-8445-4386-87b3-a5a6c1ef5409" (UID: "61cdc95b-8445-4386-87b3-a5a6c1ef5409"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.079113 5120 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/61cdc95b-8445-4386-87b3-a5a6c1ef5409-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.079141 5120 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/61cdc95b-8445-4386-87b3-a5a6c1ef5409-var-lock\") on node \"crc\" DevicePath \"\"" Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.084485 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61cdc95b-8445-4386-87b3-a5a6c1ef5409-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "61cdc95b-8445-4386-87b3-a5a6c1ef5409" (UID: "61cdc95b-8445-4386-87b3-a5a6c1ef5409"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.200455 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61cdc95b-8445-4386-87b3-a5a6c1ef5409-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.472197 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.473213 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.474010 5120 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.474486 5120 status_manager.go:895] "Failed to get status for pod" podUID="61cdc95b-8445-4386-87b3-a5a6c1ef5409" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.604910 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.604994 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.605092 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.605129 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.605203 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.605230 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.605276 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.605306 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.605572 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.605827 5120 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.605858 5120 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.605871 5120 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.605883 5120 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.606711 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.707289 5120 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.740566 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.741249 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="137385641c3a889a062861c4d4c5e74639a19c7146eb48b5aa38b856f33f74b9" exitCode=0 Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.741425 5120 scope.go:117] "RemoveContainer" containerID="a663f4dad2bc6a5fe1ec338c1f0a64a0d6d616647dd0615b10a021db83128824" Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.741451 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.743169 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"61cdc95b-8445-4386-87b3-a5a6c1ef5409","Type":"ContainerDied","Data":"9af363072b40cee21c1984ef46472cbc501d585365b1967bc3370f7399f64793"} Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.743209 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9af363072b40cee21c1984ef46472cbc501d585365b1967bc3370f7399f64793" Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.743275 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.756257 5120 scope.go:117] "RemoveContainer" containerID="667d0fa793fa7a1f4bd25f2d9712e1904e20430fe61ba718ced9f03551336e97"
Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.761432 5120 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.12:6443: connect: connection refused"
Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.761878 5120 status_manager.go:895] "Failed to get status for pod" podUID="61cdc95b-8445-4386-87b3-a5a6c1ef5409" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.12:6443: connect: connection refused"
Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.762375 5120 status_manager.go:895] "Failed to get status for pod" podUID="61cdc95b-8445-4386-87b3-a5a6c1ef5409" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.12:6443: connect: connection refused"
Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.762896 5120 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.12:6443: connect: connection refused"
Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.768383 5120 scope.go:117] "RemoveContainer" containerID="a7d1c45dc8b53e74445e58caf0d0fbb1af46161d873688e7f02b38fd0428ed6a"
Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.779841 5120 scope.go:117] "RemoveContainer" containerID="82e5abe2e48b9bc24be2a124ddb74d73753e6d23955cfc138efb98d4bd6f1d79"
Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.795827 5120 scope.go:117] "RemoveContainer" containerID="137385641c3a889a062861c4d4c5e74639a19c7146eb48b5aa38b856f33f74b9"
Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.817564 5120 scope.go:117] "RemoveContainer" containerID="39adff1a81aa61a71c26a3b775b5dd302d606e87769a7a0cb2228b80c99b5b3d"
Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.886588 5120 scope.go:117] "RemoveContainer" containerID="a663f4dad2bc6a5fe1ec338c1f0a64a0d6d616647dd0615b10a021db83128824"
Dec 11 16:04:23 crc kubenswrapper[5120]: E1211 16:04:23.887054 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a663f4dad2bc6a5fe1ec338c1f0a64a0d6d616647dd0615b10a021db83128824\": container with ID starting with a663f4dad2bc6a5fe1ec338c1f0a64a0d6d616647dd0615b10a021db83128824 not found: ID does not exist" containerID="a663f4dad2bc6a5fe1ec338c1f0a64a0d6d616647dd0615b10a021db83128824"
Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.887083 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a663f4dad2bc6a5fe1ec338c1f0a64a0d6d616647dd0615b10a021db83128824"} err="failed to get container status \"a663f4dad2bc6a5fe1ec338c1f0a64a0d6d616647dd0615b10a021db83128824\": rpc error: code = NotFound desc = could not find container \"a663f4dad2bc6a5fe1ec338c1f0a64a0d6d616647dd0615b10a021db83128824\": container with ID starting with a663f4dad2bc6a5fe1ec338c1f0a64a0d6d616647dd0615b10a021db83128824 not found: ID does not exist"
Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.887119 5120 scope.go:117] "RemoveContainer" containerID="667d0fa793fa7a1f4bd25f2d9712e1904e20430fe61ba718ced9f03551336e97"
Dec 11 16:04:23 crc kubenswrapper[5120]: E1211 16:04:23.887443 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"667d0fa793fa7a1f4bd25f2d9712e1904e20430fe61ba718ced9f03551336e97\": container with ID starting with 667d0fa793fa7a1f4bd25f2d9712e1904e20430fe61ba718ced9f03551336e97 not found: ID does not exist" containerID="667d0fa793fa7a1f4bd25f2d9712e1904e20430fe61ba718ced9f03551336e97"
Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.887467 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"667d0fa793fa7a1f4bd25f2d9712e1904e20430fe61ba718ced9f03551336e97"} err="failed to get container status \"667d0fa793fa7a1f4bd25f2d9712e1904e20430fe61ba718ced9f03551336e97\": rpc error: code = NotFound desc = could not find container \"667d0fa793fa7a1f4bd25f2d9712e1904e20430fe61ba718ced9f03551336e97\": container with ID starting with 667d0fa793fa7a1f4bd25f2d9712e1904e20430fe61ba718ced9f03551336e97 not found: ID does not exist"
Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.887493 5120 scope.go:117] "RemoveContainer" containerID="a7d1c45dc8b53e74445e58caf0d0fbb1af46161d873688e7f02b38fd0428ed6a"
Dec 11 16:04:23 crc kubenswrapper[5120]: E1211 16:04:23.889375 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7d1c45dc8b53e74445e58caf0d0fbb1af46161d873688e7f02b38fd0428ed6a\": container with ID starting with a7d1c45dc8b53e74445e58caf0d0fbb1af46161d873688e7f02b38fd0428ed6a not found: ID does not exist" containerID="a7d1c45dc8b53e74445e58caf0d0fbb1af46161d873688e7f02b38fd0428ed6a"
Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.889447 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7d1c45dc8b53e74445e58caf0d0fbb1af46161d873688e7f02b38fd0428ed6a"} err="failed to get container status \"a7d1c45dc8b53e74445e58caf0d0fbb1af46161d873688e7f02b38fd0428ed6a\": rpc error: code = NotFound desc = could not find container \"a7d1c45dc8b53e74445e58caf0d0fbb1af46161d873688e7f02b38fd0428ed6a\": container with ID starting with a7d1c45dc8b53e74445e58caf0d0fbb1af46161d873688e7f02b38fd0428ed6a not found: ID does not exist"
Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.889489 5120 scope.go:117] "RemoveContainer" containerID="82e5abe2e48b9bc24be2a124ddb74d73753e6d23955cfc138efb98d4bd6f1d79"
Dec 11 16:04:23 crc kubenswrapper[5120]: E1211 16:04:23.889837 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82e5abe2e48b9bc24be2a124ddb74d73753e6d23955cfc138efb98d4bd6f1d79\": container with ID starting with 82e5abe2e48b9bc24be2a124ddb74d73753e6d23955cfc138efb98d4bd6f1d79 not found: ID does not exist" containerID="82e5abe2e48b9bc24be2a124ddb74d73753e6d23955cfc138efb98d4bd6f1d79"
Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.889862 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82e5abe2e48b9bc24be2a124ddb74d73753e6d23955cfc138efb98d4bd6f1d79"} err="failed to get container status \"82e5abe2e48b9bc24be2a124ddb74d73753e6d23955cfc138efb98d4bd6f1d79\": rpc error: code = NotFound desc = could not find container \"82e5abe2e48b9bc24be2a124ddb74d73753e6d23955cfc138efb98d4bd6f1d79\": container with ID starting with 82e5abe2e48b9bc24be2a124ddb74d73753e6d23955cfc138efb98d4bd6f1d79 not found: ID does not exist"
Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.889877 5120 scope.go:117] "RemoveContainer" containerID="137385641c3a889a062861c4d4c5e74639a19c7146eb48b5aa38b856f33f74b9"
Dec 11 16:04:23 crc kubenswrapper[5120]: E1211 16:04:23.890317 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"137385641c3a889a062861c4d4c5e74639a19c7146eb48b5aa38b856f33f74b9\": container with ID starting with 137385641c3a889a062861c4d4c5e74639a19c7146eb48b5aa38b856f33f74b9 not found: ID does not exist" containerID="137385641c3a889a062861c4d4c5e74639a19c7146eb48b5aa38b856f33f74b9"
Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.890368 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"137385641c3a889a062861c4d4c5e74639a19c7146eb48b5aa38b856f33f74b9"} err="failed to get container status \"137385641c3a889a062861c4d4c5e74639a19c7146eb48b5aa38b856f33f74b9\": rpc error: code = NotFound desc = could not find container \"137385641c3a889a062861c4d4c5e74639a19c7146eb48b5aa38b856f33f74b9\": container with ID starting with 137385641c3a889a062861c4d4c5e74639a19c7146eb48b5aa38b856f33f74b9 not found: ID does not exist"
Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.890402 5120 scope.go:117] "RemoveContainer" containerID="39adff1a81aa61a71c26a3b775b5dd302d606e87769a7a0cb2228b80c99b5b3d"
Dec 11 16:04:23 crc kubenswrapper[5120]: E1211 16:04:23.890962 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39adff1a81aa61a71c26a3b775b5dd302d606e87769a7a0cb2228b80c99b5b3d\": container with ID starting with 39adff1a81aa61a71c26a3b775b5dd302d606e87769a7a0cb2228b80c99b5b3d not found: ID does not exist" containerID="39adff1a81aa61a71c26a3b775b5dd302d606e87769a7a0cb2228b80c99b5b3d"
Dec 11 16:04:23 crc kubenswrapper[5120]: I1211 16:04:23.891005 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39adff1a81aa61a71c26a3b775b5dd302d606e87769a7a0cb2228b80c99b5b3d"} err="failed to get container status \"39adff1a81aa61a71c26a3b775b5dd302d606e87769a7a0cb2228b80c99b5b3d\": rpc error: code = NotFound desc = could not find container \"39adff1a81aa61a71c26a3b775b5dd302d606e87769a7a0cb2228b80c99b5b3d\": container with ID starting with 39adff1a81aa61a71c26a3b775b5dd302d606e87769a7a0cb2228b80c99b5b3d not found: ID does not exist"
Dec 11 16:04:24 crc kubenswrapper[5120]: E1211 16:04:24.419981 5120 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.12:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188034c1ed261ea5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 16:04:21.441265317 +0000 UTC m=+210.695568658,LastTimestamp:2025-12-11 16:04:21.441265317 +0000 UTC m=+210.695568658,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 16:04:25 crc kubenswrapper[5120]: I1211 16:04:25.027650 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes"
Dec 11 16:04:28 crc kubenswrapper[5120]: E1211 16:04:28.656450 5120 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused"
Dec 11 16:04:28 crc kubenswrapper[5120]: E1211 16:04:28.657263 5120 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused"
Dec 11 16:04:28 crc kubenswrapper[5120]: E1211 16:04:28.657585 5120 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused"
Dec 11 16:04:28 crc kubenswrapper[5120]: E1211 16:04:28.657883 5120 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused"
Dec 11 16:04:28 crc kubenswrapper[5120]: E1211 16:04:28.658178 5120 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused"
Dec 11 16:04:28 crc kubenswrapper[5120]: I1211 16:04:28.658204 5120 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Dec 11 16:04:28 crc kubenswrapper[5120]: E1211 16:04:28.658444 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="200ms"
Dec 11 16:04:28 crc kubenswrapper[5120]: I1211 16:04:28.717969 5120 patch_prober.go:28] interesting pod/machine-config-daemon-fpg9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 16:04:28 crc kubenswrapper[5120]: I1211 16:04:28.718087 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" podUID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 16:04:28 crc kubenswrapper[5120]: E1211 16:04:28.859118 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="400ms"
Dec 11 16:04:29 crc kubenswrapper[5120]: E1211 16:04:29.260721 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="800ms"
Dec 11 16:04:30 crc kubenswrapper[5120]: E1211 16:04:30.061578 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="1.6s"
Dec 11 16:04:31 crc kubenswrapper[5120]: I1211 16:04:31.025108 5120 status_manager.go:895] "Failed to get status for pod" podUID="61cdc95b-8445-4386-87b3-a5a6c1ef5409" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.12:6443: connect: connection refused"
Dec 11 16:04:31 crc kubenswrapper[5120]: E1211 16:04:31.663045 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused" interval="3.2s"
Dec 11 16:04:32 crc kubenswrapper[5120]: E1211 16:04:32.039044 5120 desired_state_of_world_populator.go:305] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.12:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66587d64c8-s2npb" volumeName="registry-storage"
Dec 11 16:04:33 crc kubenswrapper[5120]: I1211 16:04:33.026879 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:04:33 crc kubenswrapper[5120]: I1211 16:04:33.027808 5120 status_manager.go:895] "Failed to get status for pod" podUID="61cdc95b-8445-4386-87b3-a5a6c1ef5409" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.12:6443: connect: connection refused"
Dec 11 16:04:33 crc kubenswrapper[5120]: I1211 16:04:33.040365 5120 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7eccf842-c196-44ed-bae9-137577128c33"
Dec 11 16:04:33 crc kubenswrapper[5120]: I1211 16:04:33.040398 5120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7eccf842-c196-44ed-bae9-137577128c33"
Dec 11 16:04:33 crc kubenswrapper[5120]: E1211 16:04:33.040713 5120 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:04:33 crc kubenswrapper[5120]: I1211 16:04:33.040954 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:04:33 crc kubenswrapper[5120]: W1211 16:04:33.064914 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57755cc5f99000cc11e193051474d4e2.slice/crio-6bd15d43d03543da96151cdedf0701aaeaf98ef3be30e667f92dbd63d26ebadd WatchSource:0}: Error finding container 6bd15d43d03543da96151cdedf0701aaeaf98ef3be30e667f92dbd63d26ebadd: Status 404 returned error can't find the container with id 6bd15d43d03543da96151cdedf0701aaeaf98ef3be30e667f92dbd63d26ebadd
Dec 11 16:04:33 crc kubenswrapper[5120]: E1211 16:04:33.261855 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:04:33Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:04:33Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:04:33Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T16:04:33Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused"
Dec 11 16:04:33 crc kubenswrapper[5120]: E1211 16:04:33.263411 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused"
Dec 11 16:04:33 crc kubenswrapper[5120]: E1211 16:04:33.263669 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused"
Dec 11 16:04:33 crc kubenswrapper[5120]: E1211 16:04:33.263855 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused"
Dec 11 16:04:33 crc kubenswrapper[5120]: E1211 16:04:33.264372 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.12:6443: connect: connection refused"
Dec 11 16:04:33 crc kubenswrapper[5120]: E1211 16:04:33.264405 5120 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Dec 11 16:04:33 crc kubenswrapper[5120]: I1211 16:04:33.796941 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 11 16:04:33 crc kubenswrapper[5120]: I1211 16:04:33.796984 5120 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="6638d34ff1843e072e0e07aee8955f1642fc6ed722b30a744affaca24191a467" exitCode=1
Dec 11 16:04:33 crc kubenswrapper[5120]: I1211 16:04:33.797094 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"6638d34ff1843e072e0e07aee8955f1642fc6ed722b30a744affaca24191a467"}
Dec 11 16:04:33 crc kubenswrapper[5120]: I1211 16:04:33.797692 5120 scope.go:117] "RemoveContainer" containerID="6638d34ff1843e072e0e07aee8955f1642fc6ed722b30a744affaca24191a467"
Dec 11 16:04:33 crc kubenswrapper[5120]: I1211 16:04:33.798163 5120 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.12:6443: connect: connection refused"
Dec 11 16:04:33 crc kubenswrapper[5120]: I1211 16:04:33.798428 5120 status_manager.go:895] "Failed to get status for pod" podUID="61cdc95b-8445-4386-87b3-a5a6c1ef5409" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.12:6443: connect: connection refused"
Dec 11 16:04:33 crc kubenswrapper[5120]: I1211 16:04:33.798955 5120 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="ef8dd086510659b728401877f4573201bf314243760cdc5c9e798bbf9e859842" exitCode=0
Dec 11 16:04:33 crc kubenswrapper[5120]: I1211 16:04:33.799040 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"ef8dd086510659b728401877f4573201bf314243760cdc5c9e798bbf9e859842"}
Dec 11 16:04:33 crc kubenswrapper[5120]: I1211 16:04:33.799059 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"6bd15d43d03543da96151cdedf0701aaeaf98ef3be30e667f92dbd63d26ebadd"}
Dec 11 16:04:33 crc kubenswrapper[5120]: I1211 16:04:33.799302 5120 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7eccf842-c196-44ed-bae9-137577128c33"
Dec 11 16:04:33 crc kubenswrapper[5120]: I1211 16:04:33.799317 5120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7eccf842-c196-44ed-bae9-137577128c33"
Dec 11 16:04:33 crc kubenswrapper[5120]: E1211 16:04:33.799540 5120 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.12:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:04:33 crc kubenswrapper[5120]: I1211 16:04:33.799769 5120 status_manager.go:895] "Failed to get status for pod" podUID="61cdc95b-8445-4386-87b3-a5a6c1ef5409" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.12:6443: connect: connection refused"
Dec 11 16:04:33 crc kubenswrapper[5120]: I1211 16:04:33.800112 5120 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.12:6443: connect: connection refused"
Dec 11 16:04:34 crc kubenswrapper[5120]: I1211 16:04:34.825974 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"770538af4b9c9a57777ef6113e3708bc1f7583083ca87876127f09d6f5482e89"}
Dec 11 16:04:34 crc kubenswrapper[5120]: I1211 16:04:34.826356 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"5c987da5bf958a9f9d4aa172f6099d98437d1bb8c78ca223a8960627fa4dd9b7"}
Dec 11 16:04:34 crc kubenswrapper[5120]: I1211 16:04:34.826371 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"63bacc5669d33b51abe2b3eba2a4a1c87611bacd2e8ff35c786152977a9b6977"}
Dec 11 16:04:34 crc kubenswrapper[5120]: I1211 16:04:34.826381 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"54968636e66a6d738e59de277c925e007794398c5193a258c60d065c28695e92"}
Dec 11 16:04:34 crc kubenswrapper[5120]: I1211 16:04:34.835935 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 11 16:04:34 crc kubenswrapper[5120]: I1211 16:04:34.836082 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"6d786d0c631d13ecc0cfdf6b7a47805c2beb5ad41db35a68d5a5915ed2022810"}
Dec 11 16:04:35 crc kubenswrapper[5120]: I1211 16:04:35.843384 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"54b142ba183f8c05496cb63cd3529e3d762e7a7a5e141ef4f6d385c9a5758a53"}
Dec 11 16:04:35 crc kubenswrapper[5120]: I1211 16:04:35.843814 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:04:35 crc kubenswrapper[5120]: I1211 16:04:35.844009 5120 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7eccf842-c196-44ed-bae9-137577128c33"
Dec 11 16:04:35 crc kubenswrapper[5120]: I1211 16:04:35.844051 5120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7eccf842-c196-44ed-bae9-137577128c33"
Dec 11 16:04:38 crc kubenswrapper[5120]: I1211 16:04:38.041707 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:04:38 crc kubenswrapper[5120]: I1211 16:04:38.042703 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:04:38 crc kubenswrapper[5120]: I1211 16:04:38.047406 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:04:39 crc kubenswrapper[5120]: I1211 16:04:39.324512 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 16:04:39 crc kubenswrapper[5120]: I1211 16:04:39.334709 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 16:04:39 crc kubenswrapper[5120]: I1211 16:04:39.864356 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 16:04:40 crc kubenswrapper[5120]: I1211 16:04:40.861918 5120 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:04:40 crc kubenswrapper[5120]: I1211 16:04:40.861953 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 16:04:41 crc kubenswrapper[5120]: I1211 16:04:41.040995 5120 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="bf306ff7-7363-4d49-9b10-f92500c1677a"
Dec 11 16:04:41 crc kubenswrapper[5120]: I1211 16:04:41.872551 5120 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7eccf842-c196-44ed-bae9-137577128c33"
Dec 11 16:04:41 crc kubenswrapper[5120]: I1211 16:04:41.872851 5120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7eccf842-c196-44ed-bae9-137577128c33"
Dec 11 16:04:41 crc kubenswrapper[5120]: I1211 16:04:41.875749 5120 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="bf306ff7-7363-4d49-9b10-f92500c1677a"
Dec 11 16:04:50 crc kubenswrapper[5120]: I1211 16:04:50.873310 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 16:04:51 crc kubenswrapper[5120]: I1211 16:04:51.289243 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\""
Dec 11 16:04:51 crc kubenswrapper[5120]: I1211 16:04:51.424260 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\""
Dec 11 16:04:51 crc kubenswrapper[5120]: I1211 16:04:51.586551 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Dec 11 16:04:52 crc kubenswrapper[5120]: I1211 16:04:52.372041 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Dec 11 16:04:52 crc kubenswrapper[5120]: I1211 16:04:52.399808 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Dec 11 16:04:53 crc kubenswrapper[5120]: I1211 16:04:53.027261 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Dec 11 16:04:53 crc kubenswrapper[5120]: I1211 16:04:53.303842 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\""
Dec 11 16:04:53 crc kubenswrapper[5120]: I1211 16:04:53.435572 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\""
Dec 11 16:04:53 crc kubenswrapper[5120]: I1211 16:04:53.832855 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Dec 11 16:04:53 crc kubenswrapper[5120]: I1211 16:04:53.867784 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Dec 11 16:04:54 crc kubenswrapper[5120]: I1211 16:04:54.060662 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Dec 11 16:04:54 crc kubenswrapper[5120]: I1211 16:04:54.072783 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Dec 11 16:04:54 crc kubenswrapper[5120]: I1211 16:04:54.102650 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\""
Dec 11 16:04:54 crc kubenswrapper[5120]: I1211 16:04:54.347714 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Dec 11 16:04:54 crc kubenswrapper[5120]: I1211 16:04:54.377713 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Dec 11 16:04:54 crc kubenswrapper[5120]: I1211 16:04:54.377723 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Dec 11 16:04:54 crc kubenswrapper[5120]: I1211 16:04:54.454051 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Dec 11 16:04:54 crc kubenswrapper[5120]: I1211 16:04:54.480628 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Dec 11 16:04:54 crc kubenswrapper[5120]: I1211 16:04:54.493247 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\""
Dec 11 16:04:54 crc kubenswrapper[5120]: I1211 16:04:54.835825 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Dec 11 16:04:54 crc kubenswrapper[5120]: I1211 16:04:54.847988 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Dec 11 16:04:54 crc kubenswrapper[5120]: I1211 16:04:54.926781 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Dec 11 16:04:54 crc kubenswrapper[5120]: I1211 16:04:54.985597 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Dec 11 16:04:55 crc kubenswrapper[5120]: I1211 16:04:55.071084 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\""
Dec 11 16:04:55 crc kubenswrapper[5120]: I1211 16:04:55.191311 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Dec 11 16:04:55 crc kubenswrapper[5120]: I1211 16:04:55.260942 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Dec 11 16:04:55 crc kubenswrapper[5120]: I1211 16:04:55.383437 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Dec 11 16:04:55 crc kubenswrapper[5120]: I1211 16:04:55.397107 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Dec 11 16:04:55 crc kubenswrapper[5120]: I1211 16:04:55.453132 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\""
Dec 11 16:04:55 crc kubenswrapper[5120]: I1211 16:04:55.472953 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Dec 11 16:04:55 crc kubenswrapper[5120]: I1211 16:04:55.532096 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Dec 11 16:04:55 crc kubenswrapper[5120]: I1211 16:04:55.725212 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\""
Dec 11 16:04:55 crc kubenswrapper[5120]: I1211 16:04:55.739182 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Dec 11 16:04:55 crc kubenswrapper[5120]: I1211 16:04:55.758596 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\""
Dec 11 16:04:55 crc kubenswrapper[5120]: I1211 16:04:55.774412 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Dec 11 16:04:55 crc kubenswrapper[5120]: I1211 16:04:55.785415 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Dec 11 16:04:55 crc kubenswrapper[5120]: I1211 16:04:55.992061 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Dec 11 16:04:56 crc kubenswrapper[5120]: I1211 16:04:56.040935 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Dec 11 16:04:56 crc kubenswrapper[5120]: I1211 16:04:56.072708 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Dec 11 16:04:56 crc kubenswrapper[5120]: I1211 16:04:56.306122 5120 ???:1] "http: TLS handshake error from 192.168.126.11:58326: no serving certificate available for the kubelet"
Dec 11 16:04:56 crc kubenswrapper[5120]: I1211 16:04:56.334272 5120 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Dec 11 16:04:56 crc kubenswrapper[5120]: I1211 16:04:56.399381 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\""
Dec 11 16:04:56 crc kubenswrapper[5120]: I1211 16:04:56.423707 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Dec 11 16:04:56 crc kubenswrapper[5120]: I1211 16:04:56.436028 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Dec 11 16:04:56 crc kubenswrapper[5120]: I1211 16:04:56.460508 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\""
Dec 11 16:04:56 crc kubenswrapper[5120]: I1211 16:04:56.508367 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\""
Dec 11 16:04:56 crc kubenswrapper[5120]: I1211 16:04:56.544833 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\""
Dec 11 16:04:56 crc kubenswrapper[5120]: I1211 16:04:56.561629 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Dec 11 16:04:56 crc kubenswrapper[5120]: I1211 16:04:56.570906 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Dec 11 16:04:56 crc kubenswrapper[5120]: I1211 16:04:56.596007 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Dec 11 16:04:56 crc kubenswrapper[5120]: I1211 16:04:56.651330 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Dec 11 16:04:56 crc kubenswrapper[5120]: I1211 16:04:56.691356 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Dec
11 16:04:56 crc kubenswrapper[5120]: I1211 16:04:56.720894 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 11 16:04:56 crc kubenswrapper[5120]: I1211 16:04:56.822241 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 11 16:04:56 crc kubenswrapper[5120]: I1211 16:04:56.881364 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 11 16:04:56 crc kubenswrapper[5120]: I1211 16:04:56.925176 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 11 16:04:56 crc kubenswrapper[5120]: I1211 16:04:56.975823 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 11 16:04:56 crc kubenswrapper[5120]: I1211 16:04:56.983308 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 11 16:04:57 crc kubenswrapper[5120]: I1211 16:04:57.022560 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 11 16:04:57 crc kubenswrapper[5120]: I1211 16:04:57.046039 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 11 16:04:57 crc kubenswrapper[5120]: I1211 16:04:57.056641 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 11 16:04:57 crc kubenswrapper[5120]: I1211 16:04:57.091318 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 11 16:04:57 crc kubenswrapper[5120]: 
I1211 16:04:57.123644 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:04:57 crc kubenswrapper[5120]: I1211 16:04:57.150076 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 11 16:04:57 crc kubenswrapper[5120]: I1211 16:04:57.250011 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Dec 11 16:04:57 crc kubenswrapper[5120]: I1211 16:04:57.298964 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 11 16:04:57 crc kubenswrapper[5120]: I1211 16:04:57.353644 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 11 16:04:57 crc kubenswrapper[5120]: I1211 16:04:57.381077 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 11 16:04:57 crc kubenswrapper[5120]: I1211 16:04:57.400401 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 11 16:04:57 crc kubenswrapper[5120]: I1211 16:04:57.408034 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:04:57 crc kubenswrapper[5120]: I1211 16:04:57.437771 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 11 16:04:57 crc kubenswrapper[5120]: I1211 16:04:57.439225 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 11 16:04:57 crc 
kubenswrapper[5120]: I1211 16:04:57.446105 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 11 16:04:57 crc kubenswrapper[5120]: I1211 16:04:57.485425 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 11 16:04:57 crc kubenswrapper[5120]: I1211 16:04:57.600209 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 11 16:04:57 crc kubenswrapper[5120]: I1211 16:04:57.661454 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Dec 11 16:04:57 crc kubenswrapper[5120]: I1211 16:04:57.698548 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 11 16:04:57 crc kubenswrapper[5120]: I1211 16:04:57.702002 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 11 16:04:57 crc kubenswrapper[5120]: I1211 16:04:57.776399 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:04:57 crc kubenswrapper[5120]: I1211 16:04:57.823276 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 11 16:04:57 crc kubenswrapper[5120]: I1211 16:04:57.866964 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Dec 11 16:04:57 crc kubenswrapper[5120]: I1211 16:04:57.867038 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 11 16:04:57 crc kubenswrapper[5120]: I1211 16:04:57.993573 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 11 16:04:58 crc kubenswrapper[5120]: I1211 16:04:58.032199 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 11 16:04:58 crc kubenswrapper[5120]: I1211 16:04:58.048019 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 11 16:04:58 crc kubenswrapper[5120]: I1211 16:04:58.058042 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 11 16:04:58 crc kubenswrapper[5120]: I1211 16:04:58.076327 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Dec 11 16:04:58 crc kubenswrapper[5120]: I1211 16:04:58.084464 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 11 16:04:58 crc kubenswrapper[5120]: I1211 16:04:58.172790 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 11 16:04:58 crc kubenswrapper[5120]: I1211 16:04:58.219133 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 11 16:04:58 crc kubenswrapper[5120]: I1211 16:04:58.497300 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 11 16:04:58 crc kubenswrapper[5120]: I1211 16:04:58.499466 5120 reflector.go:430] 
"Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 11 16:04:58 crc kubenswrapper[5120]: I1211 16:04:58.509397 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Dec 11 16:04:58 crc kubenswrapper[5120]: I1211 16:04:58.533979 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 11 16:04:58 crc kubenswrapper[5120]: I1211 16:04:58.648916 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 11 16:04:58 crc kubenswrapper[5120]: I1211 16:04:58.664368 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 11 16:04:58 crc kubenswrapper[5120]: I1211 16:04:58.688096 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 11 16:04:58 crc kubenswrapper[5120]: I1211 16:04:58.688356 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 11 16:04:58 crc kubenswrapper[5120]: I1211 16:04:58.703646 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Dec 11 16:04:58 crc kubenswrapper[5120]: I1211 16:04:58.717495 5120 patch_prober.go:28] interesting pod/machine-config-daemon-fpg9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 16:04:58 crc kubenswrapper[5120]: I1211 16:04:58.717592 5120 prober.go:120] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" podUID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 16:04:58 crc kubenswrapper[5120]: I1211 16:04:58.786507 5120 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 11 16:04:58 crc kubenswrapper[5120]: I1211 16:04:58.807996 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 11 16:04:58 crc kubenswrapper[5120]: I1211 16:04:58.817411 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Dec 11 16:04:58 crc kubenswrapper[5120]: I1211 16:04:58.818216 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:04:58 crc kubenswrapper[5120]: I1211 16:04:58.821053 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Dec 11 16:04:58 crc kubenswrapper[5120]: I1211 16:04:58.868547 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 11 16:04:58 crc kubenswrapper[5120]: I1211 16:04:58.972184 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 11 16:04:59 crc kubenswrapper[5120]: I1211 16:04:59.013066 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 11 16:04:59 crc kubenswrapper[5120]: I1211 16:04:59.054477 5120 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 11 16:04:59 crc kubenswrapper[5120]: I1211 16:04:59.162233 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 11 16:04:59 crc kubenswrapper[5120]: I1211 16:04:59.254478 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:04:59 crc kubenswrapper[5120]: I1211 16:04:59.268090 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 11 16:04:59 crc kubenswrapper[5120]: I1211 16:04:59.291964 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 11 16:04:59 crc kubenswrapper[5120]: I1211 16:04:59.371636 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 11 16:04:59 crc kubenswrapper[5120]: I1211 16:04:59.374887 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 11 16:04:59 crc kubenswrapper[5120]: I1211 16:04:59.453228 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 11 16:04:59 crc kubenswrapper[5120]: I1211 16:04:59.537478 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 11 16:04:59 crc kubenswrapper[5120]: I1211 16:04:59.722431 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 11 16:04:59 crc kubenswrapper[5120]: I1211 16:04:59.779685 
5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 11 16:04:59 crc kubenswrapper[5120]: I1211 16:04:59.916383 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 11 16:04:59 crc kubenswrapper[5120]: I1211 16:04:59.944500 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Dec 11 16:04:59 crc kubenswrapper[5120]: I1211 16:04:59.951349 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 11 16:04:59 crc kubenswrapper[5120]: I1211 16:04:59.954212 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 11 16:05:00 crc kubenswrapper[5120]: I1211 16:05:00.103615 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 11 16:05:00 crc kubenswrapper[5120]: I1211 16:05:00.112423 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 11 16:05:00 crc kubenswrapper[5120]: I1211 16:05:00.224051 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 11 16:05:00 crc kubenswrapper[5120]: I1211 16:05:00.306558 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 11 16:05:00 crc kubenswrapper[5120]: I1211 16:05:00.356160 5120 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 11 16:05:00 crc kubenswrapper[5120]: I1211 16:05:00.414577 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 11 16:05:00 crc kubenswrapper[5120]: I1211 16:05:00.435219 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 11 16:05:00 crc kubenswrapper[5120]: I1211 16:05:00.461778 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 11 16:05:00 crc kubenswrapper[5120]: I1211 16:05:00.485558 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 11 16:05:00 crc kubenswrapper[5120]: I1211 16:05:00.487895 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:05:00 crc kubenswrapper[5120]: I1211 16:05:00.535268 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 11 16:05:00 crc kubenswrapper[5120]: I1211 16:05:00.552793 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 11 16:05:00 crc kubenswrapper[5120]: I1211 16:05:00.625772 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 11 16:05:00 crc kubenswrapper[5120]: I1211 16:05:00.838297 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 11 16:05:00 crc kubenswrapper[5120]: I1211 16:05:00.908228 5120 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:05:00 crc kubenswrapper[5120]: I1211 16:05:00.921208 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 11 16:05:00 crc kubenswrapper[5120]: I1211 16:05:00.987427 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 11 16:05:01 crc kubenswrapper[5120]: I1211 16:05:01.047235 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 11 16:05:01 crc kubenswrapper[5120]: I1211 16:05:01.074716 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 11 16:05:01 crc kubenswrapper[5120]: I1211 16:05:01.088749 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 11 16:05:01 crc kubenswrapper[5120]: I1211 16:05:01.103759 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 11 16:05:01 crc kubenswrapper[5120]: I1211 16:05:01.244773 5120 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 11 16:05:01 crc kubenswrapper[5120]: I1211 16:05:01.267971 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 11 16:05:01 crc kubenswrapper[5120]: I1211 16:05:01.273120 5120 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 11 16:05:01 crc kubenswrapper[5120]: I1211 16:05:01.371857 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 11 16:05:01 crc kubenswrapper[5120]: I1211 16:05:01.403344 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 11 16:05:01 crc kubenswrapper[5120]: I1211 16:05:01.414521 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 11 16:05:01 crc kubenswrapper[5120]: I1211 16:05:01.457402 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 11 16:05:01 crc kubenswrapper[5120]: I1211 16:05:01.554236 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 11 16:05:01 crc kubenswrapper[5120]: I1211 16:05:01.628549 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 11 16:05:01 crc kubenswrapper[5120]: I1211 16:05:01.640024 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Dec 11 16:05:01 crc kubenswrapper[5120]: I1211 16:05:01.683284 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 11 16:05:01 crc kubenswrapper[5120]: I1211 16:05:01.701620 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 11 16:05:01 crc kubenswrapper[5120]: I1211 16:05:01.998015 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Dec 11 16:05:02 crc kubenswrapper[5120]: I1211 16:05:02.011909 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 11 16:05:02 crc kubenswrapper[5120]: I1211 16:05:02.113007 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 11 16:05:02 crc kubenswrapper[5120]: I1211 16:05:02.125035 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 11 16:05:02 crc kubenswrapper[5120]: I1211 16:05:02.242556 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 11 16:05:02 crc kubenswrapper[5120]: I1211 16:05:02.272733 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 11 16:05:02 crc kubenswrapper[5120]: I1211 16:05:02.277944 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 11 16:05:02 crc kubenswrapper[5120]: I1211 16:05:02.319237 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:05:02 crc kubenswrapper[5120]: I1211 16:05:02.403014 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 11 16:05:02 crc kubenswrapper[5120]: I1211 16:05:02.466960 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 11 16:05:02 crc kubenswrapper[5120]: I1211 16:05:02.480049 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 11 16:05:02 crc kubenswrapper[5120]: I1211 16:05:02.499116 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 11 16:05:02 crc kubenswrapper[5120]: I1211 16:05:02.553874 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 11 16:05:02 crc kubenswrapper[5120]: I1211 16:05:02.587906 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:05:02 crc kubenswrapper[5120]: I1211 16:05:02.655190 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Dec 11 16:05:02 crc kubenswrapper[5120]: I1211 16:05:02.657459 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 11 16:05:02 crc kubenswrapper[5120]: I1211 16:05:02.679674 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 11 16:05:02 crc kubenswrapper[5120]: I1211 16:05:02.716839 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 11 16:05:02 crc kubenswrapper[5120]: I1211 16:05:02.736846 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 11 16:05:02 crc kubenswrapper[5120]: I1211 16:05:02.788833 5120 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 11 16:05:02 crc kubenswrapper[5120]: I1211 16:05:02.831488 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 11 16:05:02 crc kubenswrapper[5120]: I1211 16:05:02.855681 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 11 16:05:02 crc kubenswrapper[5120]: I1211 16:05:02.862693 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 11 16:05:02 crc kubenswrapper[5120]: I1211 16:05:02.900001 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 11 16:05:02 crc kubenswrapper[5120]: I1211 16:05:02.910591 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 11 16:05:02 crc kubenswrapper[5120]: I1211 16:05:02.957191 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 11 16:05:03 crc kubenswrapper[5120]: I1211 16:05:03.065270 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 11 16:05:03 crc kubenswrapper[5120]: I1211 16:05:03.123325 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 11 16:05:03 crc kubenswrapper[5120]: I1211 16:05:03.125739 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 11 16:05:03 crc kubenswrapper[5120]: I1211 16:05:03.208729 5120 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 11 16:05:03 crc kubenswrapper[5120]: I1211 16:05:03.230664 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 11 16:05:03 crc kubenswrapper[5120]: I1211 16:05:03.264549 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 11 16:05:03 crc kubenswrapper[5120]: I1211 16:05:03.289206 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:05:03 crc kubenswrapper[5120]: I1211 16:05:03.420530 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 11 16:05:03 crc kubenswrapper[5120]: I1211 16:05:03.470779 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 11 16:05:03 crc kubenswrapper[5120]: I1211 16:05:03.480339 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:05:03 crc kubenswrapper[5120]: I1211 16:05:03.482598 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 11 16:05:03 crc kubenswrapper[5120]: I1211 16:05:03.640717 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 11 16:05:03 crc kubenswrapper[5120]: I1211 16:05:03.643785 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 11 16:05:03 
crc kubenswrapper[5120]: I1211 16:05:03.660082 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 11 16:05:03 crc kubenswrapper[5120]: I1211 16:05:03.714967 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 11 16:05:03 crc kubenswrapper[5120]: I1211 16:05:03.782080 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 11 16:05:03 crc kubenswrapper[5120]: I1211 16:05:03.797297 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 11 16:05:03 crc kubenswrapper[5120]: I1211 16:05:03.882072 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 11 16:05:03 crc kubenswrapper[5120]: I1211 16:05:03.887194 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 11 16:05:03 crc kubenswrapper[5120]: I1211 16:05:03.928136 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 11 16:05:04 crc kubenswrapper[5120]: I1211 16:05:04.049282 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 11 16:05:04 crc kubenswrapper[5120]: I1211 16:05:04.218779 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 11 16:05:04 crc kubenswrapper[5120]: I1211 16:05:04.282575 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 11 16:05:04 crc kubenswrapper[5120]: I1211 16:05:04.443642 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 11 16:05:04 crc kubenswrapper[5120]: I1211 16:05:04.445630 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 11 16:05:04 crc kubenswrapper[5120]: I1211 16:05:04.467361 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 11 16:05:04 crc kubenswrapper[5120]: I1211 16:05:04.525126 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 11 16:05:04 crc kubenswrapper[5120]: I1211 16:05:04.570223 5120 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 11 16:05:04 crc kubenswrapper[5120]: I1211 16:05:04.580332 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 11 16:05:04 crc kubenswrapper[5120]: I1211 16:05:04.596842 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Dec 11 16:05:04 crc kubenswrapper[5120]: I1211 16:05:04.710931 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Dec 11 16:05:04 crc kubenswrapper[5120]: I1211 16:05:04.740639 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 11 16:05:04 crc kubenswrapper[5120]: I1211 16:05:04.753685 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 11 16:05:04 crc kubenswrapper[5120]: I1211 16:05:04.805751 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 11 16:05:04 crc kubenswrapper[5120]: I1211 16:05:04.850439 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 11 16:05:04 crc kubenswrapper[5120]: I1211 16:05:04.872222 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:05:04 crc kubenswrapper[5120]: I1211 16:05:04.937258 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 11 16:05:04 crc kubenswrapper[5120]: I1211 16:05:04.982232 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 11 16:05:05 crc kubenswrapper[5120]: I1211 16:05:05.031688 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 11 16:05:05 crc kubenswrapper[5120]: I1211 16:05:05.143370 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 11 16:05:05 crc kubenswrapper[5120]: I1211 16:05:05.168130 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 11 16:05:05 crc kubenswrapper[5120]: I1211 16:05:05.223927 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 11 16:05:05 crc kubenswrapper[5120]: I1211 
16:05:05.241373 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 11 16:05:05 crc kubenswrapper[5120]: I1211 16:05:05.353477 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 11 16:05:05 crc kubenswrapper[5120]: I1211 16:05:05.388440 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 11 16:05:05 crc kubenswrapper[5120]: I1211 16:05:05.416840 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 11 16:05:05 crc kubenswrapper[5120]: I1211 16:05:05.488296 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 11 16:05:05 crc kubenswrapper[5120]: I1211 16:05:05.511879 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 11 16:05:05 crc kubenswrapper[5120]: I1211 16:05:05.514702 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 11 16:05:05 crc kubenswrapper[5120]: I1211 16:05:05.705216 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 11 16:05:05 crc kubenswrapper[5120]: I1211 16:05:05.778090 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 11 16:05:05 crc kubenswrapper[5120]: I1211 16:05:05.900844 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Dec 11 16:05:05 crc 
kubenswrapper[5120]: I1211 16:05:05.967424 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 11 16:05:06 crc kubenswrapper[5120]: I1211 16:05:06.154135 5120 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 11 16:05:06 crc kubenswrapper[5120]: I1211 16:05:06.158369 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 16:05:06 crc kubenswrapper[5120]: I1211 16:05:06.158412 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 16:05:06 crc kubenswrapper[5120]: I1211 16:05:06.162552 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:05:06 crc kubenswrapper[5120]: I1211 16:05:06.162958 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 16:05:06 crc kubenswrapper[5120]: I1211 16:05:06.175559 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=26.175544194 podStartE2EDuration="26.175544194s" podCreationTimestamp="2025-12-11 16:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:05:06.17341623 +0000 UTC m=+255.427719561" watchObservedRunningTime="2025-12-11 16:05:06.175544194 +0000 UTC m=+255.429847525" Dec 11 16:05:06 crc kubenswrapper[5120]: I1211 16:05:06.274418 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 11 16:05:06 crc kubenswrapper[5120]: I1211 16:05:06.346056 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 11 16:05:06 crc kubenswrapper[5120]: I1211 16:05:06.424210 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 11 16:05:06 crc kubenswrapper[5120]: I1211 16:05:06.430293 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 11 16:05:06 crc kubenswrapper[5120]: I1211 16:05:06.590297 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Dec 11 16:05:06 crc kubenswrapper[5120]: I1211 16:05:06.607662 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 11 16:05:06 crc kubenswrapper[5120]: I1211 16:05:06.725868 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 11 16:05:06 crc kubenswrapper[5120]: I1211 16:05:06.885575 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 11 16:05:06 crc kubenswrapper[5120]: I1211 16:05:06.923228 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 11 16:05:07 crc kubenswrapper[5120]: I1211 16:05:07.300951 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 11 16:05:07 crc kubenswrapper[5120]: I1211 16:05:07.312569 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 11 16:05:07 crc kubenswrapper[5120]: I1211 16:05:07.324167 5120 reflector.go:430] "Caches 
populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 11 16:05:07 crc kubenswrapper[5120]: I1211 16:05:07.841794 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 11 16:05:08 crc kubenswrapper[5120]: I1211 16:05:08.071656 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 11 16:05:08 crc kubenswrapper[5120]: I1211 16:05:08.256630 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 11 16:05:08 crc kubenswrapper[5120]: I1211 16:05:08.614714 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 11 16:05:08 crc kubenswrapper[5120]: I1211 16:05:08.983595 5120 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Dec 11 16:05:09 crc kubenswrapper[5120]: I1211 16:05:09.372466 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 11 16:05:10 crc kubenswrapper[5120]: I1211 16:05:10.542911 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 11 16:05:10 crc kubenswrapper[5120]: I1211 16:05:10.761693 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 11 16:05:11 crc kubenswrapper[5120]: I1211 16:05:11.591810 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c2744"] Dec 11 16:05:11 crc kubenswrapper[5120]: I1211 16:05:11.592726 5120 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-c2744" podUID="4b9fbe5e-2046-431a-af21-9bfbbbecf32b" containerName="registry-server" containerID="cri-o://46f0ff1ecb1b24ef93c6f8733a8df6816929703c575f63c17636985b942ce10f" gracePeriod=30 Dec 11 16:05:11 crc kubenswrapper[5120]: I1211 16:05:11.598092 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rj8n4"] Dec 11 16:05:11 crc kubenswrapper[5120]: I1211 16:05:11.598430 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rj8n4" podUID="6ca05d96-1ede-4860-abf0-dda71706ae45" containerName="registry-server" containerID="cri-o://811c6c11294d25f9f12404e73ed117bbc3cea27e577cc4231412ac5cac3cb4e9" gracePeriod=30 Dec 11 16:05:11 crc kubenswrapper[5120]: I1211 16:05:11.605649 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-q29gs"] Dec 11 16:05:11 crc kubenswrapper[5120]: I1211 16:05:11.605932 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-q29gs" podUID="09d4a454-c53e-446e-9c58-ace5cef3d494" containerName="marketplace-operator" containerID="cri-o://c807a398116b049c7b5eac4ea99ee09c5562c9ba1845966dfcd7cc9319941221" gracePeriod=30 Dec 11 16:05:11 crc kubenswrapper[5120]: I1211 16:05:11.624106 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l2fzs"] Dec 11 16:05:11 crc kubenswrapper[5120]: I1211 16:05:11.624482 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l2fzs" podUID="ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02" containerName="registry-server" containerID="cri-o://7359f29e456d4f6dae4bc8a1c5b8a8372face338eb63dfcc88d11f89d667f035" gracePeriod=30 Dec 11 16:05:11 crc kubenswrapper[5120]: I1211 16:05:11.630310 5120 kubelet.go:2553] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-operators-58qrd"] Dec 11 16:05:11 crc kubenswrapper[5120]: I1211 16:05:11.630641 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-58qrd" podUID="1d89b60e-d6e7-47df-898a-199387c5b767" containerName="registry-server" containerID="cri-o://fc42eadd4bcefe72543e0015553167e6ad14c08692c134e9b0224e7f01036aea" gracePeriod=30 Dec 11 16:05:11 crc kubenswrapper[5120]: I1211 16:05:11.648629 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-q4nvh"] Dec 11 16:05:11 crc kubenswrapper[5120]: I1211 16:05:11.649871 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="61cdc95b-8445-4386-87b3-a5a6c1ef5409" containerName="installer" Dec 11 16:05:11 crc kubenswrapper[5120]: I1211 16:05:11.649888 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="61cdc95b-8445-4386-87b3-a5a6c1ef5409" containerName="installer" Dec 11 16:05:11 crc kubenswrapper[5120]: I1211 16:05:11.650021 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="61cdc95b-8445-4386-87b3-a5a6c1ef5409" containerName="installer" Dec 11 16:05:11 crc kubenswrapper[5120]: I1211 16:05:11.668311 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-q4nvh" Dec 11 16:05:11 crc kubenswrapper[5120]: I1211 16:05:11.676375 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-q4nvh"] Dec 11 16:05:11 crc kubenswrapper[5120]: I1211 16:05:11.762020 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s557f\" (UniqueName: \"kubernetes.io/projected/b75b255c-d590-49d5-abc0-84ff933c0ca2-kube-api-access-s557f\") pod \"marketplace-operator-547dbd544d-q4nvh\" (UID: \"b75b255c-d590-49d5-abc0-84ff933c0ca2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-q4nvh" Dec 11 16:05:11 crc kubenswrapper[5120]: I1211 16:05:11.762093 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b75b255c-d590-49d5-abc0-84ff933c0ca2-tmp\") pod \"marketplace-operator-547dbd544d-q4nvh\" (UID: \"b75b255c-d590-49d5-abc0-84ff933c0ca2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-q4nvh" Dec 11 16:05:11 crc kubenswrapper[5120]: I1211 16:05:11.762122 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b75b255c-d590-49d5-abc0-84ff933c0ca2-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-q4nvh\" (UID: \"b75b255c-d590-49d5-abc0-84ff933c0ca2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-q4nvh" Dec 11 16:05:11 crc kubenswrapper[5120]: I1211 16:05:11.762195 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b75b255c-d590-49d5-abc0-84ff933c0ca2-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-q4nvh\" (UID: \"b75b255c-d590-49d5-abc0-84ff933c0ca2\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-q4nvh" Dec 11 16:05:11 crc kubenswrapper[5120]: I1211 16:05:11.863193 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b75b255c-d590-49d5-abc0-84ff933c0ca2-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-q4nvh\" (UID: \"b75b255c-d590-49d5-abc0-84ff933c0ca2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-q4nvh" Dec 11 16:05:11 crc kubenswrapper[5120]: I1211 16:05:11.863271 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s557f\" (UniqueName: \"kubernetes.io/projected/b75b255c-d590-49d5-abc0-84ff933c0ca2-kube-api-access-s557f\") pod \"marketplace-operator-547dbd544d-q4nvh\" (UID: \"b75b255c-d590-49d5-abc0-84ff933c0ca2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-q4nvh" Dec 11 16:05:11 crc kubenswrapper[5120]: I1211 16:05:11.863303 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b75b255c-d590-49d5-abc0-84ff933c0ca2-tmp\") pod \"marketplace-operator-547dbd544d-q4nvh\" (UID: \"b75b255c-d590-49d5-abc0-84ff933c0ca2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-q4nvh" Dec 11 16:05:11 crc kubenswrapper[5120]: I1211 16:05:11.863320 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b75b255c-d590-49d5-abc0-84ff933c0ca2-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-q4nvh\" (UID: \"b75b255c-d590-49d5-abc0-84ff933c0ca2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-q4nvh" Dec 11 16:05:11 crc kubenswrapper[5120]: I1211 16:05:11.864549 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/b75b255c-d590-49d5-abc0-84ff933c0ca2-tmp\") pod \"marketplace-operator-547dbd544d-q4nvh\" (UID: \"b75b255c-d590-49d5-abc0-84ff933c0ca2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-q4nvh" Dec 11 16:05:11 crc kubenswrapper[5120]: I1211 16:05:11.864691 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b75b255c-d590-49d5-abc0-84ff933c0ca2-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-q4nvh\" (UID: \"b75b255c-d590-49d5-abc0-84ff933c0ca2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-q4nvh" Dec 11 16:05:11 crc kubenswrapper[5120]: I1211 16:05:11.876308 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b75b255c-d590-49d5-abc0-84ff933c0ca2-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-q4nvh\" (UID: \"b75b255c-d590-49d5-abc0-84ff933c0ca2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-q4nvh" Dec 11 16:05:11 crc kubenswrapper[5120]: I1211 16:05:11.882490 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s557f\" (UniqueName: \"kubernetes.io/projected/b75b255c-d590-49d5-abc0-84ff933c0ca2-kube-api-access-s557f\") pod \"marketplace-operator-547dbd544d-q4nvh\" (UID: \"b75b255c-d590-49d5-abc0-84ff933c0ca2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-q4nvh" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.005877 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-q4nvh" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.009644 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-q29gs" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.031226 5120 generic.go:358] "Generic (PLEG): container finished" podID="6ca05d96-1ede-4860-abf0-dda71706ae45" containerID="811c6c11294d25f9f12404e73ed117bbc3cea27e577cc4231412ac5cac3cb4e9" exitCode=0 Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.031293 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj8n4" event={"ID":"6ca05d96-1ede-4860-abf0-dda71706ae45","Type":"ContainerDied","Data":"811c6c11294d25f9f12404e73ed117bbc3cea27e577cc4231412ac5cac3cb4e9"} Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.032795 5120 generic.go:358] "Generic (PLEG): container finished" podID="09d4a454-c53e-446e-9c58-ace5cef3d494" containerID="c807a398116b049c7b5eac4ea99ee09c5562c9ba1845966dfcd7cc9319941221" exitCode=0 Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.032876 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-q29gs" event={"ID":"09d4a454-c53e-446e-9c58-ace5cef3d494","Type":"ContainerDied","Data":"c807a398116b049c7b5eac4ea99ee09c5562c9ba1845966dfcd7cc9319941221"} Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.032896 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-q29gs" event={"ID":"09d4a454-c53e-446e-9c58-ace5cef3d494","Type":"ContainerDied","Data":"bddff2d149bf6526b42eb0b69b0f00bfe59d535bc5de44242b6a729722292852"} Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.032916 5120 scope.go:117] "RemoveContainer" containerID="c807a398116b049c7b5eac4ea99ee09c5562c9ba1845966dfcd7cc9319941221" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.033035 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-q29gs" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.044991 5120 generic.go:358] "Generic (PLEG): container finished" podID="4b9fbe5e-2046-431a-af21-9bfbbbecf32b" containerID="46f0ff1ecb1b24ef93c6f8733a8df6816929703c575f63c17636985b942ce10f" exitCode=0 Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.045099 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c2744" event={"ID":"4b9fbe5e-2046-431a-af21-9bfbbbecf32b","Type":"ContainerDied","Data":"46f0ff1ecb1b24ef93c6f8733a8df6816929703c575f63c17636985b942ce10f"} Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.047230 5120 generic.go:358] "Generic (PLEG): container finished" podID="1d89b60e-d6e7-47df-898a-199387c5b767" containerID="fc42eadd4bcefe72543e0015553167e6ad14c08692c134e9b0224e7f01036aea" exitCode=0 Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.047350 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58qrd" event={"ID":"1d89b60e-d6e7-47df-898a-199387c5b767","Type":"ContainerDied","Data":"fc42eadd4bcefe72543e0015553167e6ad14c08692c134e9b0224e7f01036aea"} Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.050275 5120 generic.go:358] "Generic (PLEG): container finished" podID="ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02" containerID="7359f29e456d4f6dae4bc8a1c5b8a8372face338eb63dfcc88d11f89d667f035" exitCode=0 Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.050310 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l2fzs" event={"ID":"ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02","Type":"ContainerDied","Data":"7359f29e456d4f6dae4bc8a1c5b8a8372face338eb63dfcc88d11f89d667f035"} Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.059244 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c2744" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.069064 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/09d4a454-c53e-446e-9c58-ace5cef3d494-tmp\") pod \"09d4a454-c53e-446e-9c58-ace5cef3d494\" (UID: \"09d4a454-c53e-446e-9c58-ace5cef3d494\") " Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.069142 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/09d4a454-c53e-446e-9c58-ace5cef3d494-marketplace-operator-metrics\") pod \"09d4a454-c53e-446e-9c58-ace5cef3d494\" (UID: \"09d4a454-c53e-446e-9c58-ace5cef3d494\") " Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.069204 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h7x8\" (UniqueName: \"kubernetes.io/projected/09d4a454-c53e-446e-9c58-ace5cef3d494-kube-api-access-8h7x8\") pod \"09d4a454-c53e-446e-9c58-ace5cef3d494\" (UID: \"09d4a454-c53e-446e-9c58-ace5cef3d494\") " Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.069245 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/09d4a454-c53e-446e-9c58-ace5cef3d494-marketplace-trusted-ca\") pod \"09d4a454-c53e-446e-9c58-ace5cef3d494\" (UID: \"09d4a454-c53e-446e-9c58-ace5cef3d494\") " Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.070470 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09d4a454-c53e-446e-9c58-ace5cef3d494-tmp" (OuterVolumeSpecName: "tmp") pod "09d4a454-c53e-446e-9c58-ace5cef3d494" (UID: "09d4a454-c53e-446e-9c58-ace5cef3d494"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.071937 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-58qrd" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.074236 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09d4a454-c53e-446e-9c58-ace5cef3d494-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "09d4a454-c53e-446e-9c58-ace5cef3d494" (UID: "09d4a454-c53e-446e-9c58-ace5cef3d494"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.087073 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09d4a454-c53e-446e-9c58-ace5cef3d494-kube-api-access-8h7x8" (OuterVolumeSpecName: "kube-api-access-8h7x8") pod "09d4a454-c53e-446e-9c58-ace5cef3d494" (UID: "09d4a454-c53e-446e-9c58-ace5cef3d494"). InnerVolumeSpecName "kube-api-access-8h7x8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.089333 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09d4a454-c53e-446e-9c58-ace5cef3d494-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "09d4a454-c53e-446e-9c58-ace5cef3d494" (UID: "09d4a454-c53e-446e-9c58-ace5cef3d494"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.089451 5120 scope.go:117] "RemoveContainer" containerID="c807a398116b049c7b5eac4ea99ee09c5562c9ba1845966dfcd7cc9319941221" Dec 11 16:05:12 crc kubenswrapper[5120]: E1211 16:05:12.089965 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c807a398116b049c7b5eac4ea99ee09c5562c9ba1845966dfcd7cc9319941221\": container with ID starting with c807a398116b049c7b5eac4ea99ee09c5562c9ba1845966dfcd7cc9319941221 not found: ID does not exist" containerID="c807a398116b049c7b5eac4ea99ee09c5562c9ba1845966dfcd7cc9319941221" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.090170 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c807a398116b049c7b5eac4ea99ee09c5562c9ba1845966dfcd7cc9319941221"} err="failed to get container status \"c807a398116b049c7b5eac4ea99ee09c5562c9ba1845966dfcd7cc9319941221\": rpc error: code = NotFound desc = could not find container \"c807a398116b049c7b5eac4ea99ee09c5562c9ba1845966dfcd7cc9319941221\": container with ID starting with c807a398116b049c7b5eac4ea99ee09c5562c9ba1845966dfcd7cc9319941221 not found: ID does not exist" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.104861 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rj8n4" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.170087 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ca05d96-1ede-4860-abf0-dda71706ae45-utilities\") pod \"6ca05d96-1ede-4860-abf0-dda71706ae45\" (UID: \"6ca05d96-1ede-4860-abf0-dda71706ae45\") " Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.170185 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ca05d96-1ede-4860-abf0-dda71706ae45-catalog-content\") pod \"6ca05d96-1ede-4860-abf0-dda71706ae45\" (UID: \"6ca05d96-1ede-4860-abf0-dda71706ae45\") " Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.170234 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d89b60e-d6e7-47df-898a-199387c5b767-utilities\") pod \"1d89b60e-d6e7-47df-898a-199387c5b767\" (UID: \"1d89b60e-d6e7-47df-898a-199387c5b767\") " Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.170282 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpjxp\" (UniqueName: \"kubernetes.io/projected/4b9fbe5e-2046-431a-af21-9bfbbbecf32b-kube-api-access-tpjxp\") pod \"4b9fbe5e-2046-431a-af21-9bfbbbecf32b\" (UID: \"4b9fbe5e-2046-431a-af21-9bfbbbecf32b\") " Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.171683 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ca05d96-1ede-4860-abf0-dda71706ae45-utilities" (OuterVolumeSpecName: "utilities") pod "6ca05d96-1ede-4860-abf0-dda71706ae45" (UID: "6ca05d96-1ede-4860-abf0-dda71706ae45"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.171708 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d89b60e-d6e7-47df-898a-199387c5b767-utilities" (OuterVolumeSpecName: "utilities") pod "1d89b60e-d6e7-47df-898a-199387c5b767" (UID: "1d89b60e-d6e7-47df-898a-199387c5b767"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.172835 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d89b60e-d6e7-47df-898a-199387c5b767-catalog-content\") pod \"1d89b60e-d6e7-47df-898a-199387c5b767\" (UID: \"1d89b60e-d6e7-47df-898a-199387c5b767\") " Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.172889 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2lw5\" (UniqueName: \"kubernetes.io/projected/6ca05d96-1ede-4860-abf0-dda71706ae45-kube-api-access-x2lw5\") pod \"6ca05d96-1ede-4860-abf0-dda71706ae45\" (UID: \"6ca05d96-1ede-4860-abf0-dda71706ae45\") " Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.172931 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b9fbe5e-2046-431a-af21-9bfbbbecf32b-utilities\") pod \"4b9fbe5e-2046-431a-af21-9bfbbbecf32b\" (UID: \"4b9fbe5e-2046-431a-af21-9bfbbbecf32b\") " Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.172962 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b9fbe5e-2046-431a-af21-9bfbbbecf32b-catalog-content\") pod \"4b9fbe5e-2046-431a-af21-9bfbbbecf32b\" (UID: \"4b9fbe5e-2046-431a-af21-9bfbbbecf32b\") " Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.172988 5120 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqf7g\" (UniqueName: \"kubernetes.io/projected/1d89b60e-d6e7-47df-898a-199387c5b767-kube-api-access-xqf7g\") pod \"1d89b60e-d6e7-47df-898a-199387c5b767\" (UID: \"1d89b60e-d6e7-47df-898a-199387c5b767\") " Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.173396 5120 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/09d4a454-c53e-446e-9c58-ace5cef3d494-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.173421 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8h7x8\" (UniqueName: \"kubernetes.io/projected/09d4a454-c53e-446e-9c58-ace5cef3d494-kube-api-access-8h7x8\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.173434 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ca05d96-1ede-4860-abf0-dda71706ae45-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.173446 5120 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/09d4a454-c53e-446e-9c58-ace5cef3d494-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.173457 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d89b60e-d6e7-47df-898a-199387c5b767-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.173468 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/09d4a454-c53e-446e-9c58-ace5cef3d494-tmp\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.173836 5120 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b9fbe5e-2046-431a-af21-9bfbbbecf32b-utilities" (OuterVolumeSpecName: "utilities") pod "4b9fbe5e-2046-431a-af21-9bfbbbecf32b" (UID: "4b9fbe5e-2046-431a-af21-9bfbbbecf32b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.176019 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b9fbe5e-2046-431a-af21-9bfbbbecf32b-kube-api-access-tpjxp" (OuterVolumeSpecName: "kube-api-access-tpjxp") pod "4b9fbe5e-2046-431a-af21-9bfbbbecf32b" (UID: "4b9fbe5e-2046-431a-af21-9bfbbbecf32b"). InnerVolumeSpecName "kube-api-access-tpjxp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.177161 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d89b60e-d6e7-47df-898a-199387c5b767-kube-api-access-xqf7g" (OuterVolumeSpecName: "kube-api-access-xqf7g") pod "1d89b60e-d6e7-47df-898a-199387c5b767" (UID: "1d89b60e-d6e7-47df-898a-199387c5b767"). InnerVolumeSpecName "kube-api-access-xqf7g". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.177422 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l2fzs" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.183359 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ca05d96-1ede-4860-abf0-dda71706ae45-kube-api-access-x2lw5" (OuterVolumeSpecName: "kube-api-access-x2lw5") pod "6ca05d96-1ede-4860-abf0-dda71706ae45" (UID: "6ca05d96-1ede-4860-abf0-dda71706ae45"). InnerVolumeSpecName "kube-api-access-x2lw5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.228841 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b9fbe5e-2046-431a-af21-9bfbbbecf32b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b9fbe5e-2046-431a-af21-9bfbbbecf32b" (UID: "4b9fbe5e-2046-431a-af21-9bfbbbecf32b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.249624 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ca05d96-1ede-4860-abf0-dda71706ae45-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ca05d96-1ede-4860-abf0-dda71706ae45" (UID: "6ca05d96-1ede-4860-abf0-dda71706ae45"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.274700 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf8cz\" (UniqueName: \"kubernetes.io/projected/ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02-kube-api-access-kf8cz\") pod \"ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02\" (UID: \"ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02\") " Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.274782 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02-catalog-content\") pod \"ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02\" (UID: \"ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02\") " Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.274836 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02-utilities\") pod \"ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02\" (UID: 
\"ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02\") " Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.274999 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ca05d96-1ede-4860-abf0-dda71706ae45-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.275015 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tpjxp\" (UniqueName: \"kubernetes.io/projected/4b9fbe5e-2046-431a-af21-9bfbbbecf32b-kube-api-access-tpjxp\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.275025 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x2lw5\" (UniqueName: \"kubernetes.io/projected/6ca05d96-1ede-4860-abf0-dda71706ae45-kube-api-access-x2lw5\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.275035 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b9fbe5e-2046-431a-af21-9bfbbbecf32b-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.275042 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b9fbe5e-2046-431a-af21-9bfbbbecf32b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.275050 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xqf7g\" (UniqueName: \"kubernetes.io/projected/1d89b60e-d6e7-47df-898a-199387c5b767-kube-api-access-xqf7g\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.275697 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02-utilities" (OuterVolumeSpecName: "utilities") pod "ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02" (UID: 
"ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.277491 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02-kube-api-access-kf8cz" (OuterVolumeSpecName: "kube-api-access-kf8cz") pod "ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02" (UID: "ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02"). InnerVolumeSpecName "kube-api-access-kf8cz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.286393 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02" (UID: "ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.287211 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d89b60e-d6e7-47df-898a-199387c5b767-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d89b60e-d6e7-47df-898a-199387c5b767" (UID: "1d89b60e-d6e7-47df-898a-199387c5b767"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.360593 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-q29gs"] Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.365171 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-q29gs"] Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.375985 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kf8cz\" (UniqueName: \"kubernetes.io/projected/ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02-kube-api-access-kf8cz\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.376010 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d89b60e-d6e7-47df-898a-199387c5b767-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.376019 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.376029 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:12 crc kubenswrapper[5120]: I1211 16:05:12.442480 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-q4nvh"] Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.028359 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09d4a454-c53e-446e-9c58-ace5cef3d494" path="/var/lib/kubelet/pods/09d4a454-c53e-446e-9c58-ace5cef3d494/volumes" Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 
16:05:13.058236 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l2fzs" event={"ID":"ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02","Type":"ContainerDied","Data":"43c426d272479e2be31914e1fafa02647d938951bcdf7a66d259bceb5d3afaac"} Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.058289 5120 scope.go:117] "RemoveContainer" containerID="7359f29e456d4f6dae4bc8a1c5b8a8372face338eb63dfcc88d11f89d667f035" Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.058245 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l2fzs" Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.059842 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-q4nvh" event={"ID":"b75b255c-d590-49d5-abc0-84ff933c0ca2","Type":"ContainerStarted","Data":"a28fba04031da372cbafb23b9ed9497ca3a461ac9e0afd2075dcc08f2b3c919e"} Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.059887 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-q4nvh" event={"ID":"b75b255c-d590-49d5-abc0-84ff933c0ca2","Type":"ContainerStarted","Data":"3c3f6bb408e880c1e371049a769018ffad9ff9d23fec93ff6d9615042c6d1f96"} Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.060203 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-q4nvh" Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.064351 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-q4nvh" Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.064719 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj8n4" 
event={"ID":"6ca05d96-1ede-4860-abf0-dda71706ae45","Type":"ContainerDied","Data":"4c98e0434faeb17e75d4b037a2b07b7d146423f04cf3eefc1b84fe3bb7de8614"} Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.064811 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rj8n4" Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.070523 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c2744" Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.070877 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c2744" event={"ID":"4b9fbe5e-2046-431a-af21-9bfbbbecf32b","Type":"ContainerDied","Data":"231e600ae4413dfe25753951051f36664e0c8d38ab97b6911b120c53e44d5c01"} Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.073626 5120 scope.go:117] "RemoveContainer" containerID="64aaeb12dacea3b86d543e74e13783b48421e006c2ce8b2031d78e20c350843b" Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.081822 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58qrd" event={"ID":"1d89b60e-d6e7-47df-898a-199387c5b767","Type":"ContainerDied","Data":"36ed1aa715057a36f0cb02f23ba2f6853493e908e4b461793eb6b6d999d6a96f"} Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.081868 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-58qrd" Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.096623 5120 scope.go:117] "RemoveContainer" containerID="d87352da48a537efd746ac0a42c6ea4d532c289edb06f4ac00a2aa078875a6bd" Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.106735 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-q4nvh" podStartSLOduration=2.106711153 podStartE2EDuration="2.106711153s" podCreationTimestamp="2025-12-11 16:05:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:05:13.081989815 +0000 UTC m=+262.336293166" watchObservedRunningTime="2025-12-11 16:05:13.106711153 +0000 UTC m=+262.361014474" Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.115673 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c2744"] Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.122265 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c2744"] Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.123289 5120 scope.go:117] "RemoveContainer" containerID="811c6c11294d25f9f12404e73ed117bbc3cea27e577cc4231412ac5cac3cb4e9" Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.123441 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l2fzs"] Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.138354 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l2fzs"] Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.141654 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rj8n4"] Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.144604 5120 scope.go:117] "RemoveContainer" 
containerID="cbb4839380cf310d249f475596ac3d4ed1730b7ed49170badb43b3799372b60d" Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.146262 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rj8n4"] Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.152322 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-58qrd"] Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.155120 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-58qrd"] Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.158856 5120 scope.go:117] "RemoveContainer" containerID="1ac9fff7ebeb4d7ea33e808856c7cdf16b14fa1bbeabc4f4c741a127d503813f" Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.179827 5120 scope.go:117] "RemoveContainer" containerID="46f0ff1ecb1b24ef93c6f8733a8df6816929703c575f63c17636985b942ce10f" Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.193187 5120 scope.go:117] "RemoveContainer" containerID="ebc91c938ebbb4b6b502fab428b4f6e305258a941c2cbe2811f5ddb77751fd55" Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.228942 5120 scope.go:117] "RemoveContainer" containerID="bed6b594111b98b816e74a60bc58558f72a4e56462f624eec3de4106729098b4" Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.240438 5120 scope.go:117] "RemoveContainer" containerID="fc42eadd4bcefe72543e0015553167e6ad14c08692c134e9b0224e7f01036aea" Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.252459 5120 scope.go:117] "RemoveContainer" containerID="fec841aa96421753527f98ca620853400bbfb304425ea5a16b4991dab1374357" Dec 11 16:05:13 crc kubenswrapper[5120]: I1211 16:05:13.265049 5120 scope.go:117] "RemoveContainer" containerID="ad9ab087692d4f2727617f8ecd68b3b20b0d51e60feb15fe877c25947edeb1de" Dec 11 16:05:14 crc kubenswrapper[5120]: I1211 16:05:14.477122 5120 kubelet.go:2547] "SyncLoop REMOVE" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 11 16:05:14 crc kubenswrapper[5120]: I1211 16:05:14.477758 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://fcc7f5d58dd45905087a0754a3664c22b40348063f4558d67a8e8d655b913d35" gracePeriod=5 Dec 11 16:05:15 crc kubenswrapper[5120]: I1211 16:05:15.042735 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d89b60e-d6e7-47df-898a-199387c5b767" path="/var/lib/kubelet/pods/1d89b60e-d6e7-47df-898a-199387c5b767/volumes" Dec 11 16:05:15 crc kubenswrapper[5120]: I1211 16:05:15.043520 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b9fbe5e-2046-431a-af21-9bfbbbecf32b" path="/var/lib/kubelet/pods/4b9fbe5e-2046-431a-af21-9bfbbbecf32b/volumes" Dec 11 16:05:15 crc kubenswrapper[5120]: I1211 16:05:15.044170 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ca05d96-1ede-4860-abf0-dda71706ae45" path="/var/lib/kubelet/pods/6ca05d96-1ede-4860-abf0-dda71706ae45/volumes" Dec 11 16:05:15 crc kubenswrapper[5120]: I1211 16:05:15.045441 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02" path="/var/lib/kubelet/pods/ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02/volumes" Dec 11 16:05:20 crc kubenswrapper[5120]: I1211 16:05:20.043006 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 11 16:05:20 crc kubenswrapper[5120]: I1211 16:05:20.043475 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:05:20 crc kubenswrapper[5120]: I1211 16:05:20.045073 5120 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Dec 11 16:05:20 crc kubenswrapper[5120]: I1211 16:05:20.121841 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 11 16:05:20 crc kubenswrapper[5120]: I1211 16:05:20.121894 5120 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="fcc7f5d58dd45905087a0754a3664c22b40348063f4558d67a8e8d655b913d35" exitCode=137 Dec 11 16:05:20 crc kubenswrapper[5120]: I1211 16:05:20.122000 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 16:05:20 crc kubenswrapper[5120]: I1211 16:05:20.122039 5120 scope.go:117] "RemoveContainer" containerID="fcc7f5d58dd45905087a0754a3664c22b40348063f4558d67a8e8d655b913d35" Dec 11 16:05:20 crc kubenswrapper[5120]: I1211 16:05:20.128954 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 11 16:05:20 crc kubenswrapper[5120]: I1211 16:05:20.129016 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 11 16:05:20 crc kubenswrapper[5120]: I1211 16:05:20.129195 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 11 16:05:20 crc kubenswrapper[5120]: I1211 16:05:20.129079 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:05:20 crc kubenswrapper[5120]: I1211 16:05:20.129258 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:05:20 crc kubenswrapper[5120]: I1211 16:05:20.129285 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 11 16:05:20 crc kubenswrapper[5120]: I1211 16:05:20.129304 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:05:20 crc kubenswrapper[5120]: I1211 16:05:20.129336 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 11 16:05:20 crc kubenswrapper[5120]: I1211 16:05:20.129564 5120 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:20 crc kubenswrapper[5120]: I1211 16:05:20.129578 5120 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:20 crc kubenswrapper[5120]: I1211 16:05:20.129588 5120 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:20 crc kubenswrapper[5120]: I1211 16:05:20.129634 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:05:20 crc kubenswrapper[5120]: I1211 16:05:20.139061 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:05:20 crc kubenswrapper[5120]: I1211 16:05:20.139274 5120 scope.go:117] "RemoveContainer" containerID="fcc7f5d58dd45905087a0754a3664c22b40348063f4558d67a8e8d655b913d35" Dec 11 16:05:20 crc kubenswrapper[5120]: E1211 16:05:20.139771 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcc7f5d58dd45905087a0754a3664c22b40348063f4558d67a8e8d655b913d35\": container with ID starting with fcc7f5d58dd45905087a0754a3664c22b40348063f4558d67a8e8d655b913d35 not found: ID does not exist" containerID="fcc7f5d58dd45905087a0754a3664c22b40348063f4558d67a8e8d655b913d35" Dec 11 16:05:20 crc kubenswrapper[5120]: I1211 16:05:20.139817 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcc7f5d58dd45905087a0754a3664c22b40348063f4558d67a8e8d655b913d35"} err="failed to get container status \"fcc7f5d58dd45905087a0754a3664c22b40348063f4558d67a8e8d655b913d35\": rpc error: code = NotFound desc = could not find container \"fcc7f5d58dd45905087a0754a3664c22b40348063f4558d67a8e8d655b913d35\": container with ID starting with fcc7f5d58dd45905087a0754a3664c22b40348063f4558d67a8e8d655b913d35 not found: ID does not exist" Dec 11 16:05:20 crc kubenswrapper[5120]: I1211 16:05:20.230951 5120 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:20 crc kubenswrapper[5120]: I1211 16:05:20.231024 5120 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:20 crc kubenswrapper[5120]: I1211 16:05:20.439413 5120 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Dec 11 16:05:21 crc kubenswrapper[5120]: I1211 16:05:21.026632 5120 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Dec 11 16:05:21 crc kubenswrapper[5120]: I1211 16:05:21.027774 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes"
Dec 11 16:05:28 crc kubenswrapper[5120]: I1211 16:05:28.645417 5120 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 11 16:05:28 crc kubenswrapper[5120]: I1211 16:05:28.717377 5120 patch_prober.go:28] interesting pod/machine-config-daemon-fpg9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 16:05:28 crc kubenswrapper[5120]: I1211 16:05:28.717670 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" podUID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 16:05:28 crc kubenswrapper[5120]: I1211 16:05:28.717818 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g"
Dec 11 16:05:28 crc kubenswrapper[5120]: I1211 16:05:28.718594 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8bded36918f78e5f934a2e80f529e76f291507fa4d302555d0d9666f63505ab7"} pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 11 16:05:28 crc kubenswrapper[5120]: I1211 16:05:28.718763 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" podUID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerName="machine-config-daemon" containerID="cri-o://8bded36918f78e5f934a2e80f529e76f291507fa4d302555d0d9666f63505ab7" gracePeriod=600
Dec 11 16:05:29 crc kubenswrapper[5120]: I1211 16:05:29.173216 5120 generic.go:358] "Generic (PLEG): container finished" podID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerID="8bded36918f78e5f934a2e80f529e76f291507fa4d302555d0d9666f63505ab7" exitCode=0
Dec 11 16:05:29 crc kubenswrapper[5120]: I1211 16:05:29.173481 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" event={"ID":"e868a29f-b837-4513-ad30-f5b6c4354a09","Type":"ContainerDied","Data":"8bded36918f78e5f934a2e80f529e76f291507fa4d302555d0d9666f63505ab7"}
Dec 11 16:05:29 crc kubenswrapper[5120]: I1211 16:05:29.173550 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" event={"ID":"e868a29f-b837-4513-ad30-f5b6c4354a09","Type":"ContainerStarted","Data":"37cd81fdaf9b948884ac7f04fbf6a66e92e823688abb89ee1140d9a2b9d90eb4"}
Dec 11 16:05:29 crc kubenswrapper[5120]: I1211 16:05:29.976487 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-5r6br"]
Dec 11 16:05:31 crc kubenswrapper[5120]: I1211 16:05:31.491016 5120 ???:1] "http: TLS handshake error from 192.168.126.11:39030: no serving certificate available for the kubelet"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.625782 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9ljt5"]
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626596 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02" containerName="extract-content"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626610 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02" containerName="extract-content"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626619 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6ca05d96-1ede-4860-abf0-dda71706ae45" containerName="extract-utilities"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626625 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ca05d96-1ede-4860-abf0-dda71706ae45" containerName="extract-utilities"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626634 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4b9fbe5e-2046-431a-af21-9bfbbbecf32b" containerName="registry-server"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626641 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b9fbe5e-2046-431a-af21-9bfbbbecf32b" containerName="registry-server"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626652 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6ca05d96-1ede-4860-abf0-dda71706ae45" containerName="registry-server"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626659 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ca05d96-1ede-4860-abf0-dda71706ae45" containerName="registry-server"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626670 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02" containerName="extract-utilities"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626676 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02" containerName="extract-utilities"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626684 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626689 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626699 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02" containerName="registry-server"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626704 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02" containerName="registry-server"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626714 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="09d4a454-c53e-446e-9c58-ace5cef3d494" containerName="marketplace-operator"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626719 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="09d4a454-c53e-446e-9c58-ace5cef3d494" containerName="marketplace-operator"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626726 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1d89b60e-d6e7-47df-898a-199387c5b767" containerName="extract-content"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626731 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d89b60e-d6e7-47df-898a-199387c5b767" containerName="extract-content"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626738 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1d89b60e-d6e7-47df-898a-199387c5b767" containerName="registry-server"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626743 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d89b60e-d6e7-47df-898a-199387c5b767" containerName="registry-server"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626750 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6ca05d96-1ede-4860-abf0-dda71706ae45" containerName="extract-content"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626759 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ca05d96-1ede-4860-abf0-dda71706ae45" containerName="extract-content"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626767 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4b9fbe5e-2046-431a-af21-9bfbbbecf32b" containerName="extract-utilities"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626773 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b9fbe5e-2046-431a-af21-9bfbbbecf32b" containerName="extract-utilities"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626784 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4b9fbe5e-2046-431a-af21-9bfbbbecf32b" containerName="extract-content"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626790 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b9fbe5e-2046-431a-af21-9bfbbbecf32b" containerName="extract-content"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626798 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1d89b60e-d6e7-47df-898a-199387c5b767" containerName="extract-utilities"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626803 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d89b60e-d6e7-47df-898a-199387c5b767" containerName="extract-utilities"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626880 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="6ca05d96-1ede-4860-abf0-dda71706ae45" containerName="registry-server"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626889 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="ec8bc77d-3d21-4cd6-addb-2e5d7d4efb02" containerName="registry-server"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626896 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="4b9fbe5e-2046-431a-af21-9bfbbbecf32b" containerName="registry-server"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626901 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="1d89b60e-d6e7-47df-898a-199387c5b767" containerName="registry-server"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626910 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.626919 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="09d4a454-c53e-446e-9c58-ace5cef3d494" containerName="marketplace-operator"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.630459 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9ljt5"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.638283 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.643494 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ljt5"]
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.740083 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dbe1b0d-3e35-48b8-93e3-e1aa3665a781-catalog-content\") pod \"redhat-marketplace-9ljt5\" (UID: \"1dbe1b0d-3e35-48b8-93e3-e1aa3665a781\") " pod="openshift-marketplace/redhat-marketplace-9ljt5"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.740133 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dbe1b0d-3e35-48b8-93e3-e1aa3665a781-utilities\") pod \"redhat-marketplace-9ljt5\" (UID: \"1dbe1b0d-3e35-48b8-93e3-e1aa3665a781\") " pod="openshift-marketplace/redhat-marketplace-9ljt5"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.740556 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p8cz\" (UniqueName: \"kubernetes.io/projected/1dbe1b0d-3e35-48b8-93e3-e1aa3665a781-kube-api-access-9p8cz\") pod \"redhat-marketplace-9ljt5\" (UID: \"1dbe1b0d-3e35-48b8-93e3-e1aa3665a781\") " pod="openshift-marketplace/redhat-marketplace-9ljt5"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.828350 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lsc8d"]
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.832026 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lsc8d"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.836728 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.842291 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dbe1b0d-3e35-48b8-93e3-e1aa3665a781-catalog-content\") pod \"redhat-marketplace-9ljt5\" (UID: \"1dbe1b0d-3e35-48b8-93e3-e1aa3665a781\") " pod="openshift-marketplace/redhat-marketplace-9ljt5"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.842320 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dbe1b0d-3e35-48b8-93e3-e1aa3665a781-utilities\") pod \"redhat-marketplace-9ljt5\" (UID: \"1dbe1b0d-3e35-48b8-93e3-e1aa3665a781\") " pod="openshift-marketplace/redhat-marketplace-9ljt5"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.842474 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8cz\" (UniqueName: \"kubernetes.io/projected/1dbe1b0d-3e35-48b8-93e3-e1aa3665a781-kube-api-access-9p8cz\") pod \"redhat-marketplace-9ljt5\" (UID: \"1dbe1b0d-3e35-48b8-93e3-e1aa3665a781\") " pod="openshift-marketplace/redhat-marketplace-9ljt5"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.842936 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dbe1b0d-3e35-48b8-93e3-e1aa3665a781-utilities\") pod \"redhat-marketplace-9ljt5\" (UID: \"1dbe1b0d-3e35-48b8-93e3-e1aa3665a781\") " pod="openshift-marketplace/redhat-marketplace-9ljt5"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.843063 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dbe1b0d-3e35-48b8-93e3-e1aa3665a781-catalog-content\") pod \"redhat-marketplace-9ljt5\" (UID: \"1dbe1b0d-3e35-48b8-93e3-e1aa3665a781\") " pod="openshift-marketplace/redhat-marketplace-9ljt5"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.845666 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lsc8d"]
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.868037 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p8cz\" (UniqueName: \"kubernetes.io/projected/1dbe1b0d-3e35-48b8-93e3-e1aa3665a781-kube-api-access-9p8cz\") pod \"redhat-marketplace-9ljt5\" (UID: \"1dbe1b0d-3e35-48b8-93e3-e1aa3665a781\") " pod="openshift-marketplace/redhat-marketplace-9ljt5"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.943253 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6882e55b-e236-46fb-aa91-b5595cd5c9ff-utilities\") pod \"community-operators-lsc8d\" (UID: \"6882e55b-e236-46fb-aa91-b5595cd5c9ff\") " pod="openshift-marketplace/community-operators-lsc8d"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.943641 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6882e55b-e236-46fb-aa91-b5595cd5c9ff-catalog-content\") pod \"community-operators-lsc8d\" (UID: \"6882e55b-e236-46fb-aa91-b5595cd5c9ff\") " pod="openshift-marketplace/community-operators-lsc8d"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.943797 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8cfb\" (UniqueName: \"kubernetes.io/projected/6882e55b-e236-46fb-aa91-b5595cd5c9ff-kube-api-access-m8cfb\") pod \"community-operators-lsc8d\" (UID: \"6882e55b-e236-46fb-aa91-b5595cd5c9ff\") " pod="openshift-marketplace/community-operators-lsc8d"
Dec 11 16:05:40 crc kubenswrapper[5120]: I1211 16:05:40.956986 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9ljt5"
Dec 11 16:05:41 crc kubenswrapper[5120]: I1211 16:05:41.045092 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6882e55b-e236-46fb-aa91-b5595cd5c9ff-utilities\") pod \"community-operators-lsc8d\" (UID: \"6882e55b-e236-46fb-aa91-b5595cd5c9ff\") " pod="openshift-marketplace/community-operators-lsc8d"
Dec 11 16:05:41 crc kubenswrapper[5120]: I1211 16:05:41.045194 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6882e55b-e236-46fb-aa91-b5595cd5c9ff-catalog-content\") pod \"community-operators-lsc8d\" (UID: \"6882e55b-e236-46fb-aa91-b5595cd5c9ff\") " pod="openshift-marketplace/community-operators-lsc8d"
Dec 11 16:05:41 crc kubenswrapper[5120]: I1211 16:05:41.045222 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m8cfb\" (UniqueName: \"kubernetes.io/projected/6882e55b-e236-46fb-aa91-b5595cd5c9ff-kube-api-access-m8cfb\") pod \"community-operators-lsc8d\" (UID: \"6882e55b-e236-46fb-aa91-b5595cd5c9ff\") " pod="openshift-marketplace/community-operators-lsc8d"
Dec 11 16:05:41 crc kubenswrapper[5120]: I1211 16:05:41.046064 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6882e55b-e236-46fb-aa91-b5595cd5c9ff-utilities\") pod \"community-operators-lsc8d\" (UID: \"6882e55b-e236-46fb-aa91-b5595cd5c9ff\") " pod="openshift-marketplace/community-operators-lsc8d"
Dec 11 16:05:41 crc kubenswrapper[5120]: I1211 16:05:41.046310 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6882e55b-e236-46fb-aa91-b5595cd5c9ff-catalog-content\") pod \"community-operators-lsc8d\" (UID: \"6882e55b-e236-46fb-aa91-b5595cd5c9ff\") " pod="openshift-marketplace/community-operators-lsc8d"
Dec 11 16:05:41 crc kubenswrapper[5120]: I1211 16:05:41.082293 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8cfb\" (UniqueName: \"kubernetes.io/projected/6882e55b-e236-46fb-aa91-b5595cd5c9ff-kube-api-access-m8cfb\") pod \"community-operators-lsc8d\" (UID: \"6882e55b-e236-46fb-aa91-b5595cd5c9ff\") " pod="openshift-marketplace/community-operators-lsc8d"
Dec 11 16:05:41 crc kubenswrapper[5120]: I1211 16:05:41.145234 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lsc8d"
Dec 11 16:05:41 crc kubenswrapper[5120]: I1211 16:05:41.182941 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ljt5"]
Dec 11 16:05:41 crc kubenswrapper[5120]: W1211 16:05:41.187504 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1dbe1b0d_3e35_48b8_93e3_e1aa3665a781.slice/crio-e6dfbd92e60611900491f281107243ab4958650d58a49cbdfff5b6b4b0563613 WatchSource:0}: Error finding container e6dfbd92e60611900491f281107243ab4958650d58a49cbdfff5b6b4b0563613: Status 404 returned error can't find the container with id e6dfbd92e60611900491f281107243ab4958650d58a49cbdfff5b6b4b0563613
Dec 11 16:05:41 crc kubenswrapper[5120]: I1211 16:05:41.236980 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ljt5" event={"ID":"1dbe1b0d-3e35-48b8-93e3-e1aa3665a781","Type":"ContainerStarted","Data":"e6dfbd92e60611900491f281107243ab4958650d58a49cbdfff5b6b4b0563613"}
Dec 11 16:05:41 crc kubenswrapper[5120]: I1211 16:05:41.545936 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lsc8d"]
Dec 11 16:05:41 crc kubenswrapper[5120]: W1211 16:05:41.548504 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6882e55b_e236_46fb_aa91_b5595cd5c9ff.slice/crio-02fed5b0ca103f56e7a040cb5a89b29f83425cdbec426226dcfaab1cafad2c60 WatchSource:0}: Error finding container 02fed5b0ca103f56e7a040cb5a89b29f83425cdbec426226dcfaab1cafad2c60: Status 404 returned error can't find the container with id 02fed5b0ca103f56e7a040cb5a89b29f83425cdbec426226dcfaab1cafad2c60
Dec 11 16:05:42 crc kubenswrapper[5120]: I1211 16:05:42.244832 5120 generic.go:358] "Generic (PLEG): container finished" podID="1dbe1b0d-3e35-48b8-93e3-e1aa3665a781" containerID="15b0e13a16027d59fd6174442d8ea7492835a34ea1ccf7efe9ad9db02606de95" exitCode=0
Dec 11 16:05:42 crc kubenswrapper[5120]: I1211 16:05:42.244962 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ljt5" event={"ID":"1dbe1b0d-3e35-48b8-93e3-e1aa3665a781","Type":"ContainerDied","Data":"15b0e13a16027d59fd6174442d8ea7492835a34ea1ccf7efe9ad9db02606de95"}
Dec 11 16:05:42 crc kubenswrapper[5120]: I1211 16:05:42.246883 5120 generic.go:358] "Generic (PLEG): container finished" podID="6882e55b-e236-46fb-aa91-b5595cd5c9ff" containerID="5d77a581e72570abde0e256a2f505ec5235a9dc8d84fdac46d54e51b209e5191" exitCode=0
Dec 11 16:05:42 crc kubenswrapper[5120]: I1211 16:05:42.246975 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lsc8d" event={"ID":"6882e55b-e236-46fb-aa91-b5595cd5c9ff","Type":"ContainerDied","Data":"5d77a581e72570abde0e256a2f505ec5235a9dc8d84fdac46d54e51b209e5191"}
Dec 11 16:05:42 crc kubenswrapper[5120]: I1211 16:05:42.246996 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lsc8d" event={"ID":"6882e55b-e236-46fb-aa91-b5595cd5c9ff","Type":"ContainerStarted","Data":"02fed5b0ca103f56e7a040cb5a89b29f83425cdbec426226dcfaab1cafad2c60"}
Dec 11 16:05:42 crc kubenswrapper[5120]: I1211 16:05:42.825851 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ldq28"]
Dec 11 16:05:42 crc kubenswrapper[5120]: I1211 16:05:42.829752 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ldq28"
Dec 11 16:05:42 crc kubenswrapper[5120]: I1211 16:05:42.832582 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Dec 11 16:05:42 crc kubenswrapper[5120]: I1211 16:05:42.839819 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ldq28"]
Dec 11 16:05:42 crc kubenswrapper[5120]: I1211 16:05:42.865585 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c732c90-f4fb-41f9-94b7-2c3c907c5dc6-utilities\") pod \"certified-operators-ldq28\" (UID: \"6c732c90-f4fb-41f9-94b7-2c3c907c5dc6\") " pod="openshift-marketplace/certified-operators-ldq28"
Dec 11 16:05:42 crc kubenswrapper[5120]: I1211 16:05:42.865638 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c732c90-f4fb-41f9-94b7-2c3c907c5dc6-catalog-content\") pod \"certified-operators-ldq28\" (UID: \"6c732c90-f4fb-41f9-94b7-2c3c907c5dc6\") " pod="openshift-marketplace/certified-operators-ldq28"
Dec 11 16:05:42 crc kubenswrapper[5120]: I1211 16:05:42.865689 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgxbp\" (UniqueName: \"kubernetes.io/projected/6c732c90-f4fb-41f9-94b7-2c3c907c5dc6-kube-api-access-zgxbp\") pod \"certified-operators-ldq28\" (UID: \"6c732c90-f4fb-41f9-94b7-2c3c907c5dc6\") " pod="openshift-marketplace/certified-operators-ldq28"
Dec 11 16:05:42 crc kubenswrapper[5120]: I1211 16:05:42.966791 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c732c90-f4fb-41f9-94b7-2c3c907c5dc6-catalog-content\") pod \"certified-operators-ldq28\" (UID: \"6c732c90-f4fb-41f9-94b7-2c3c907c5dc6\") " pod="openshift-marketplace/certified-operators-ldq28"
Dec 11 16:05:42 crc kubenswrapper[5120]: I1211 16:05:42.967243 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zgxbp\" (UniqueName: \"kubernetes.io/projected/6c732c90-f4fb-41f9-94b7-2c3c907c5dc6-kube-api-access-zgxbp\") pod \"certified-operators-ldq28\" (UID: \"6c732c90-f4fb-41f9-94b7-2c3c907c5dc6\") " pod="openshift-marketplace/certified-operators-ldq28"
Dec 11 16:05:42 crc kubenswrapper[5120]: I1211 16:05:42.967404 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c732c90-f4fb-41f9-94b7-2c3c907c5dc6-utilities\") pod \"certified-operators-ldq28\" (UID: \"6c732c90-f4fb-41f9-94b7-2c3c907c5dc6\") " pod="openshift-marketplace/certified-operators-ldq28"
Dec 11 16:05:42 crc kubenswrapper[5120]: I1211 16:05:42.967503 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c732c90-f4fb-41f9-94b7-2c3c907c5dc6-catalog-content\") pod \"certified-operators-ldq28\" (UID: \"6c732c90-f4fb-41f9-94b7-2c3c907c5dc6\") " pod="openshift-marketplace/certified-operators-ldq28"
Dec 11 16:05:42 crc kubenswrapper[5120]: I1211 16:05:42.967743 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c732c90-f4fb-41f9-94b7-2c3c907c5dc6-utilities\") pod \"certified-operators-ldq28\" (UID: \"6c732c90-f4fb-41f9-94b7-2c3c907c5dc6\") " pod="openshift-marketplace/certified-operators-ldq28"
Dec 11 16:05:42 crc kubenswrapper[5120]: I1211 16:05:42.990724 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgxbp\" (UniqueName: \"kubernetes.io/projected/6c732c90-f4fb-41f9-94b7-2c3c907c5dc6-kube-api-access-zgxbp\") pod \"certified-operators-ldq28\" (UID: \"6c732c90-f4fb-41f9-94b7-2c3c907c5dc6\") " pod="openshift-marketplace/certified-operators-ldq28"
Dec 11 16:05:43 crc kubenswrapper[5120]: I1211 16:05:43.144954 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ldq28"
Dec 11 16:05:43 crc kubenswrapper[5120]: I1211 16:05:43.255309 5120 generic.go:358] "Generic (PLEG): container finished" podID="1dbe1b0d-3e35-48b8-93e3-e1aa3665a781" containerID="cc4cd19e06d8ae0b6a65b7bfce858692e2edad1c7d7eb8a7b06b2151b5ba23f3" exitCode=0
Dec 11 16:05:43 crc kubenswrapper[5120]: I1211 16:05:43.255629 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ljt5" event={"ID":"1dbe1b0d-3e35-48b8-93e3-e1aa3665a781","Type":"ContainerDied","Data":"cc4cd19e06d8ae0b6a65b7bfce858692e2edad1c7d7eb8a7b06b2151b5ba23f3"}
Dec 11 16:05:43 crc kubenswrapper[5120]: I1211 16:05:43.290638 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lsc8d" event={"ID":"6882e55b-e236-46fb-aa91-b5595cd5c9ff","Type":"ContainerStarted","Data":"37f342caed4d843efb5e696ed300d022dc1de787e8bd94564e1680cb92545708"}
Dec 11 16:05:43 crc kubenswrapper[5120]: I1211 16:05:43.424923 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4zt8l"]
Dec 11 16:05:43 crc kubenswrapper[5120]: I1211 16:05:43.428689 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4zt8l"
Dec 11 16:05:43 crc kubenswrapper[5120]: I1211 16:05:43.433421 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4zt8l"]
Dec 11 16:05:43 crc kubenswrapper[5120]: I1211 16:05:43.433710 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Dec 11 16:05:43 crc kubenswrapper[5120]: I1211 16:05:43.486459 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx4zl\" (UniqueName: \"kubernetes.io/projected/3f6e9609-66b2-416f-a395-b4c947ac726c-kube-api-access-tx4zl\") pod \"redhat-operators-4zt8l\" (UID: \"3f6e9609-66b2-416f-a395-b4c947ac726c\") " pod="openshift-marketplace/redhat-operators-4zt8l"
Dec 11 16:05:43 crc kubenswrapper[5120]: I1211 16:05:43.486518 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f6e9609-66b2-416f-a395-b4c947ac726c-catalog-content\") pod \"redhat-operators-4zt8l\" (UID: \"3f6e9609-66b2-416f-a395-b4c947ac726c\") " pod="openshift-marketplace/redhat-operators-4zt8l"
Dec 11 16:05:43 crc kubenswrapper[5120]: I1211 16:05:43.486618 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f6e9609-66b2-416f-a395-b4c947ac726c-utilities\") pod \"redhat-operators-4zt8l\" (UID: \"3f6e9609-66b2-416f-a395-b4c947ac726c\") " pod="openshift-marketplace/redhat-operators-4zt8l"
Dec 11 16:05:43 crc kubenswrapper[5120]: I1211 16:05:43.564978 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ldq28"]
Dec 11 16:05:43 crc kubenswrapper[5120]: I1211 16:05:43.587299 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f6e9609-66b2-416f-a395-b4c947ac726c-utilities\") pod \"redhat-operators-4zt8l\" (UID: \"3f6e9609-66b2-416f-a395-b4c947ac726c\") " pod="openshift-marketplace/redhat-operators-4zt8l"
Dec 11 16:05:43 crc kubenswrapper[5120]: I1211 16:05:43.587576 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tx4zl\" (UniqueName: \"kubernetes.io/projected/3f6e9609-66b2-416f-a395-b4c947ac726c-kube-api-access-tx4zl\") pod \"redhat-operators-4zt8l\" (UID: \"3f6e9609-66b2-416f-a395-b4c947ac726c\") " pod="openshift-marketplace/redhat-operators-4zt8l"
Dec 11 16:05:43 crc kubenswrapper[5120]: I1211 16:05:43.587633 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f6e9609-66b2-416f-a395-b4c947ac726c-catalog-content\") pod \"redhat-operators-4zt8l\" (UID: \"3f6e9609-66b2-416f-a395-b4c947ac726c\") " pod="openshift-marketplace/redhat-operators-4zt8l"
Dec 11 16:05:43 crc kubenswrapper[5120]: I1211 16:05:43.588108 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f6e9609-66b2-416f-a395-b4c947ac726c-catalog-content\") pod \"redhat-operators-4zt8l\" (UID: \"3f6e9609-66b2-416f-a395-b4c947ac726c\") " pod="openshift-marketplace/redhat-operators-4zt8l"
Dec 11 16:05:43 crc kubenswrapper[5120]: I1211 16:05:43.588114 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f6e9609-66b2-416f-a395-b4c947ac726c-utilities\") pod \"redhat-operators-4zt8l\" (UID: \"3f6e9609-66b2-416f-a395-b4c947ac726c\") " pod="openshift-marketplace/redhat-operators-4zt8l"
Dec 11 16:05:43 crc kubenswrapper[5120]: I1211 16:05:43.606340 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx4zl\" (UniqueName: \"kubernetes.io/projected/3f6e9609-66b2-416f-a395-b4c947ac726c-kube-api-access-tx4zl\") pod \"redhat-operators-4zt8l\" (UID: \"3f6e9609-66b2-416f-a395-b4c947ac726c\") " pod="openshift-marketplace/redhat-operators-4zt8l"
Dec 11 16:05:43 crc kubenswrapper[5120]: I1211 16:05:43.844972 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4zt8l"
Dec 11 16:05:44 crc kubenswrapper[5120]: I1211 16:05:44.248203 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4zt8l"]
Dec 11 16:05:44 crc kubenswrapper[5120]: I1211 16:05:44.298207 5120 generic.go:358] "Generic (PLEG): container finished" podID="6c732c90-f4fb-41f9-94b7-2c3c907c5dc6" containerID="03982821c4ffcbe8fe49eccf7dcc060974587c9a7193279a57395bd4102034ad" exitCode=0
Dec 11 16:05:44 crc kubenswrapper[5120]: I1211 16:05:44.298517 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ldq28" event={"ID":"6c732c90-f4fb-41f9-94b7-2c3c907c5dc6","Type":"ContainerDied","Data":"03982821c4ffcbe8fe49eccf7dcc060974587c9a7193279a57395bd4102034ad"}
Dec 11 16:05:44 crc kubenswrapper[5120]: I1211 16:05:44.298562 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ldq28" event={"ID":"6c732c90-f4fb-41f9-94b7-2c3c907c5dc6","Type":"ContainerStarted","Data":"d7435ce9aa73149bcd769040543e9fd045f9d25e157ce739df3e62f40cc28775"}
Dec 11 16:05:44 crc kubenswrapper[5120]: I1211 16:05:44.303074 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ljt5" event={"ID":"1dbe1b0d-3e35-48b8-93e3-e1aa3665a781","Type":"ContainerStarted","Data":"308b2fad4b9bd9f3e7c0cfaf4a218e9aa751091ec7dbe061ef36d37cfbfa1f53"}
Dec 11 16:05:44 crc kubenswrapper[5120]: I1211 16:05:44.309066 5120 generic.go:358] "Generic (PLEG): container finished" podID="6882e55b-e236-46fb-aa91-b5595cd5c9ff" containerID="37f342caed4d843efb5e696ed300d022dc1de787e8bd94564e1680cb92545708" exitCode=0
Dec 11 16:05:44 crc kubenswrapper[5120]: I1211 16:05:44.309794 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lsc8d" event={"ID":"6882e55b-e236-46fb-aa91-b5595cd5c9ff","Type":"ContainerDied","Data":"37f342caed4d843efb5e696ed300d022dc1de787e8bd94564e1680cb92545708"}
Dec 11 16:05:44 crc kubenswrapper[5120]: I1211 16:05:44.311456 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4zt8l" event={"ID":"3f6e9609-66b2-416f-a395-b4c947ac726c","Type":"ContainerStarted","Data":"eef9d07f0d1f086de69c0c5dbe75e8dd77b2c7e13b269f2e13f052c85751a082"}
Dec 11 16:05:44 crc kubenswrapper[5120]: I1211 16:05:44.360696 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9ljt5" podStartSLOduration=3.667567627 podStartE2EDuration="4.360673632s" podCreationTimestamp="2025-12-11 16:05:40 +0000 UTC" firstStartedPulling="2025-12-11 16:05:42.246028954 +0000 UTC m=+291.500332295" lastFinishedPulling="2025-12-11 16:05:42.939134969 +0000 UTC m=+292.193438300" observedRunningTime="2025-12-11 16:05:44.360510817 +0000 UTC m=+293.614814148" watchObservedRunningTime="2025-12-11 16:05:44.360673632 +0000 UTC m=+293.614976963"
Dec 11 16:05:45 crc kubenswrapper[5120]: I1211 16:05:45.316656 5120 generic.go:358] "Generic (PLEG): container finished" podID="3f6e9609-66b2-416f-a395-b4c947ac726c" containerID="fffc1e5e77c831b83d0cd822e66e3d047c5e147f38a1ed214870d0b23af8e307" exitCode=0
Dec 11 16:05:45 crc kubenswrapper[5120]: I1211 16:05:45.316758 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4zt8l" event={"ID":"3f6e9609-66b2-416f-a395-b4c947ac726c","Type":"ContainerDied","Data":"fffc1e5e77c831b83d0cd822e66e3d047c5e147f38a1ed214870d0b23af8e307"}
Dec 11 16:05:45 crc kubenswrapper[5120]: I1211 16:05:45.321117
5120 generic.go:358] "Generic (PLEG): container finished" podID="6c732c90-f4fb-41f9-94b7-2c3c907c5dc6" containerID="e38521cbdb62bff794bd488cebca735194c082fe8be00a1746d294df008cb485" exitCode=0 Dec 11 16:05:45 crc kubenswrapper[5120]: I1211 16:05:45.321248 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ldq28" event={"ID":"6c732c90-f4fb-41f9-94b7-2c3c907c5dc6","Type":"ContainerDied","Data":"e38521cbdb62bff794bd488cebca735194c082fe8be00a1746d294df008cb485"} Dec 11 16:05:45 crc kubenswrapper[5120]: I1211 16:05:45.323017 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lsc8d" event={"ID":"6882e55b-e236-46fb-aa91-b5595cd5c9ff","Type":"ContainerStarted","Data":"666b3c3c2642bd9d0de0a56ac4a0dacbc3c90d8561c75d1cdf5065ea1568cadf"} Dec 11 16:05:45 crc kubenswrapper[5120]: I1211 16:05:45.356696 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lsc8d" podStartSLOduration=4.566742821 podStartE2EDuration="5.356681405s" podCreationTimestamp="2025-12-11 16:05:40 +0000 UTC" firstStartedPulling="2025-12-11 16:05:42.24835849 +0000 UTC m=+291.502661861" lastFinishedPulling="2025-12-11 16:05:43.038297114 +0000 UTC m=+292.292600445" observedRunningTime="2025-12-11 16:05:45.355039969 +0000 UTC m=+294.609343300" watchObservedRunningTime="2025-12-11 16:05:45.356681405 +0000 UTC m=+294.610984736" Dec 11 16:05:46 crc kubenswrapper[5120]: I1211 16:05:46.329434 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4zt8l" event={"ID":"3f6e9609-66b2-416f-a395-b4c947ac726c","Type":"ContainerStarted","Data":"29e378189a71521670b0c306777e44f35c54eab04342a04cd84dc69283738897"} Dec 11 16:05:46 crc kubenswrapper[5120]: I1211 16:05:46.332171 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ldq28" 
event={"ID":"6c732c90-f4fb-41f9-94b7-2c3c907c5dc6","Type":"ContainerStarted","Data":"dffbab15f74346dc976fb2130d754f1e88ee4bd7aa097b10f096647472bcd21e"} Dec 11 16:05:46 crc kubenswrapper[5120]: I1211 16:05:46.368587 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ldq28" podStartSLOduration=3.712791643 podStartE2EDuration="4.368574839s" podCreationTimestamp="2025-12-11 16:05:42 +0000 UTC" firstStartedPulling="2025-12-11 16:05:44.299458804 +0000 UTC m=+293.553762145" lastFinishedPulling="2025-12-11 16:05:44.95524201 +0000 UTC m=+294.209545341" observedRunningTime="2025-12-11 16:05:46.367462007 +0000 UTC m=+295.621765338" watchObservedRunningTime="2025-12-11 16:05:46.368574839 +0000 UTC m=+295.622878160" Dec 11 16:05:47 crc kubenswrapper[5120]: I1211 16:05:47.339492 5120 generic.go:358] "Generic (PLEG): container finished" podID="3f6e9609-66b2-416f-a395-b4c947ac726c" containerID="29e378189a71521670b0c306777e44f35c54eab04342a04cd84dc69283738897" exitCode=0 Dec 11 16:05:47 crc kubenswrapper[5120]: I1211 16:05:47.339551 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4zt8l" event={"ID":"3f6e9609-66b2-416f-a395-b4c947ac726c","Type":"ContainerDied","Data":"29e378189a71521670b0c306777e44f35c54eab04342a04cd84dc69283738897"} Dec 11 16:05:48 crc kubenswrapper[5120]: I1211 16:05:48.358107 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4zt8l" event={"ID":"3f6e9609-66b2-416f-a395-b4c947ac726c","Type":"ContainerStarted","Data":"9c66fdd4836f750ccf429c210700fb861e8d5b0b16a463d00325c93ab57d1124"} Dec 11 16:05:48 crc kubenswrapper[5120]: I1211 16:05:48.384485 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4zt8l" podStartSLOduration=4.829679876 podStartE2EDuration="5.384465584s" podCreationTimestamp="2025-12-11 16:05:43 +0000 UTC" 
firstStartedPulling="2025-12-11 16:05:45.317501773 +0000 UTC m=+294.571805104" lastFinishedPulling="2025-12-11 16:05:45.872287481 +0000 UTC m=+295.126590812" observedRunningTime="2025-12-11 16:05:48.378124774 +0000 UTC m=+297.632428155" watchObservedRunningTime="2025-12-11 16:05:48.384465584 +0000 UTC m=+297.638768925" Dec 11 16:05:50 crc kubenswrapper[5120]: I1211 16:05:50.957986 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9ljt5" Dec 11 16:05:50 crc kubenswrapper[5120]: I1211 16:05:50.958349 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-9ljt5" Dec 11 16:05:50 crc kubenswrapper[5120]: I1211 16:05:50.999816 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9ljt5" Dec 11 16:05:51 crc kubenswrapper[5120]: I1211 16:05:51.136388 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 11 16:05:51 crc kubenswrapper[5120]: I1211 16:05:51.141238 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 11 16:05:51 crc kubenswrapper[5120]: I1211 16:05:51.146229 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-lsc8d" Dec 11 16:05:51 crc kubenswrapper[5120]: I1211 16:05:51.148359 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lsc8d" Dec 11 16:05:51 crc kubenswrapper[5120]: I1211 16:05:51.196817 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lsc8d" Dec 11 16:05:51 
crc kubenswrapper[5120]: I1211 16:05:51.410953 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9ljt5" Dec 11 16:05:51 crc kubenswrapper[5120]: I1211 16:05:51.413564 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lsc8d" Dec 11 16:05:53 crc kubenswrapper[5120]: I1211 16:05:53.145189 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-ldq28" Dec 11 16:05:53 crc kubenswrapper[5120]: I1211 16:05:53.145529 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ldq28" Dec 11 16:05:53 crc kubenswrapper[5120]: I1211 16:05:53.200938 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ldq28" Dec 11 16:05:53 crc kubenswrapper[5120]: I1211 16:05:53.415522 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ldq28" Dec 11 16:05:53 crc kubenswrapper[5120]: I1211 16:05:53.845584 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-4zt8l" Dec 11 16:05:53 crc kubenswrapper[5120]: I1211 16:05:53.845935 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4zt8l" Dec 11 16:05:53 crc kubenswrapper[5120]: I1211 16:05:53.878843 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4zt8l" Dec 11 16:05:54 crc kubenswrapper[5120]: I1211 16:05:54.428663 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4zt8l" Dec 11 16:05:55 crc kubenswrapper[5120]: I1211 16:05:55.017196 5120 kuberuntime_container.go:858] "Killing 
container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" podUID="dddece50-dbb3-4cd0-9102-15371396ab49" containerName="oauth-openshift" containerID="cri-o://26e6cc7ba408ad86b374acd503dfd7d94e7b5e3942fe8cd753a45abf152628cc" gracePeriod=15 Dec 11 16:05:57 crc kubenswrapper[5120]: I1211 16:05:57.409071 5120 generic.go:358] "Generic (PLEG): container finished" podID="dddece50-dbb3-4cd0-9102-15371396ab49" containerID="26e6cc7ba408ad86b374acd503dfd7d94e7b5e3942fe8cd753a45abf152628cc" exitCode=0 Dec 11 16:05:57 crc kubenswrapper[5120]: I1211 16:05:57.409141 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" event={"ID":"dddece50-dbb3-4cd0-9102-15371396ab49","Type":"ContainerDied","Data":"26e6cc7ba408ad86b374acd503dfd7d94e7b5e3942fe8cd753a45abf152628cc"} Dec 11 16:05:57 crc kubenswrapper[5120]: I1211 16:05:57.541284 5120 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-5r6br container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Dec 11 16:05:57 crc kubenswrapper[5120]: I1211 16:05:57.541366 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" podUID="dddece50-dbb3-4cd0-9102-15371396ab49" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.154130 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.181517 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-55b6456b9c-wgcch"] Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.182039 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dddece50-dbb3-4cd0-9102-15371396ab49" containerName="oauth-openshift" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.182059 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="dddece50-dbb3-4cd0-9102-15371396ab49" containerName="oauth-openshift" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.182184 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="dddece50-dbb3-4cd0-9102-15371396ab49" containerName="oauth-openshift" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.252620 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-cliconfig\") pod \"dddece50-dbb3-4cd0-9102-15371396ab49\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.252703 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-session\") pod \"dddece50-dbb3-4cd0-9102-15371396ab49\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.252733 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dddece50-dbb3-4cd0-9102-15371396ab49-audit-dir\") pod \"dddece50-dbb3-4cd0-9102-15371396ab49\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " Dec 11 16:05:58 
crc kubenswrapper[5120]: I1211 16:05:58.252754 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-user-template-provider-selection\") pod \"dddece50-dbb3-4cd0-9102-15371396ab49\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.252850 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dddece50-dbb3-4cd0-9102-15371396ab49-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "dddece50-dbb3-4cd0-9102-15371396ab49" (UID: "dddece50-dbb3-4cd0-9102-15371396ab49"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.252930 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-user-template-error\") pod \"dddece50-dbb3-4cd0-9102-15371396ab49\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.253005 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-trusted-ca-bundle\") pod \"dddece50-dbb3-4cd0-9102-15371396ab49\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.253049 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-router-certs\") pod \"dddece50-dbb3-4cd0-9102-15371396ab49\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " Dec 11 
16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.253089 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-user-template-login\") pod \"dddece50-dbb3-4cd0-9102-15371396ab49\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.253121 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-ocp-branding-template\") pod \"dddece50-dbb3-4cd0-9102-15371396ab49\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.253162 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dddece50-dbb3-4cd0-9102-15371396ab49-audit-policies\") pod \"dddece50-dbb3-4cd0-9102-15371396ab49\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.253182 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-user-idp-0-file-data\") pod \"dddece50-dbb3-4cd0-9102-15371396ab49\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.253210 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvctm\" (UniqueName: \"kubernetes.io/projected/dddece50-dbb3-4cd0-9102-15371396ab49-kube-api-access-fvctm\") pod \"dddece50-dbb3-4cd0-9102-15371396ab49\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.253251 5120 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-serving-cert\") pod \"dddece50-dbb3-4cd0-9102-15371396ab49\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.253267 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-service-ca\") pod \"dddece50-dbb3-4cd0-9102-15371396ab49\" (UID: \"dddece50-dbb3-4cd0-9102-15371396ab49\") " Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.253608 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "dddece50-dbb3-4cd0-9102-15371396ab49" (UID: "dddece50-dbb3-4cd0-9102-15371396ab49"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.253658 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "dddece50-dbb3-4cd0-9102-15371396ab49" (UID: "dddece50-dbb3-4cd0-9102-15371396ab49"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.253884 5120 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dddece50-dbb3-4cd0-9102-15371396ab49-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.253905 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.253917 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.254066 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "dddece50-dbb3-4cd0-9102-15371396ab49" (UID: "dddece50-dbb3-4cd0-9102-15371396ab49"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.254295 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dddece50-dbb3-4cd0-9102-15371396ab49-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "dddece50-dbb3-4cd0-9102-15371396ab49" (UID: "dddece50-dbb3-4cd0-9102-15371396ab49"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.258827 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "dddece50-dbb3-4cd0-9102-15371396ab49" (UID: "dddece50-dbb3-4cd0-9102-15371396ab49"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.259253 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "dddece50-dbb3-4cd0-9102-15371396ab49" (UID: "dddece50-dbb3-4cd0-9102-15371396ab49"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.258994 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dddece50-dbb3-4cd0-9102-15371396ab49-kube-api-access-fvctm" (OuterVolumeSpecName: "kube-api-access-fvctm") pod "dddece50-dbb3-4cd0-9102-15371396ab49" (UID: "dddece50-dbb3-4cd0-9102-15371396ab49"). InnerVolumeSpecName "kube-api-access-fvctm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.259450 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "dddece50-dbb3-4cd0-9102-15371396ab49" (UID: "dddece50-dbb3-4cd0-9102-15371396ab49"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.259584 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "dddece50-dbb3-4cd0-9102-15371396ab49" (UID: "dddece50-dbb3-4cd0-9102-15371396ab49"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.259814 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "dddece50-dbb3-4cd0-9102-15371396ab49" (UID: "dddece50-dbb3-4cd0-9102-15371396ab49"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.264847 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "dddece50-dbb3-4cd0-9102-15371396ab49" (UID: "dddece50-dbb3-4cd0-9102-15371396ab49"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.265595 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "dddece50-dbb3-4cd0-9102-15371396ab49" (UID: "dddece50-dbb3-4cd0-9102-15371396ab49"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.265872 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "dddece50-dbb3-4cd0-9102-15371396ab49" (UID: "dddece50-dbb3-4cd0-9102-15371396ab49"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.354743 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.354780 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.354792 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.354802 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.354811 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.354823 5120 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dddece50-dbb3-4cd0-9102-15371396ab49-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.354834 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.354847 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fvctm\" (UniqueName: \"kubernetes.io/projected/dddece50-dbb3-4cd0-9102-15371396ab49-kube-api-access-fvctm\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.354855 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.354864 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.354873 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dddece50-dbb3-4cd0-9102-15371396ab49-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.886326 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication/oauth-openshift-55b6456b9c-wgcch"] Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.886628 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.886635 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" event={"ID":"dddece50-dbb3-4cd0-9102-15371396ab49","Type":"ContainerDied","Data":"43a1ab65cf57a58fa3501bda1d8e7e2877860ba335f17c22d2ab43a35d648908"} Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.886852 5120 scope.go:117] "RemoveContainer" containerID="26e6cc7ba408ad86b374acd503dfd7d94e7b5e3942fe8cd753a45abf152628cc" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.886464 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-5r6br" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.934220 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-5r6br"] Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.939089 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-5r6br"] Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.962907 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.963031 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-system-service-ca\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.963167 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4651af45-84f7-428c-85d2-c546b3c64d40-audit-policies\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.963226 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-user-template-error\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.963273 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-system-router-certs\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.963316 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.963336 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.963362 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.963382 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4651af45-84f7-428c-85d2-c546b3c64d40-audit-dir\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.963417 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnvl9\" (UniqueName: \"kubernetes.io/projected/4651af45-84f7-428c-85d2-c546b3c64d40-kube-api-access-tnvl9\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:58 crc 
kubenswrapper[5120]: I1211 16:05:58.963440 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.963470 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.963518 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-system-session\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:58 crc kubenswrapper[5120]: I1211 16:05:58.963602 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-user-template-login\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:59 crc kubenswrapper[5120]: I1211 16:05:59.033785 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dddece50-dbb3-4cd0-9102-15371396ab49" 
path="/var/lib/kubelet/pods/dddece50-dbb3-4cd0-9102-15371396ab49/volumes" Dec 11 16:05:59 crc kubenswrapper[5120]: I1211 16:05:59.064955 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:59 crc kubenswrapper[5120]: I1211 16:05:59.064998 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-system-service-ca\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:59 crc kubenswrapper[5120]: I1211 16:05:59.065036 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4651af45-84f7-428c-85d2-c546b3c64d40-audit-policies\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:59 crc kubenswrapper[5120]: I1211 16:05:59.065052 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-user-template-error\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:59 crc kubenswrapper[5120]: I1211 16:05:59.065074 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-system-router-certs\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:59 crc kubenswrapper[5120]: I1211 16:05:59.065093 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:59 crc kubenswrapper[5120]: I1211 16:05:59.065108 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:59 crc kubenswrapper[5120]: I1211 16:05:59.065127 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:59 crc kubenswrapper[5120]: I1211 16:05:59.065179 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4651af45-84f7-428c-85d2-c546b3c64d40-audit-dir\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: 
\"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:59 crc kubenswrapper[5120]: I1211 16:05:59.065221 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tnvl9\" (UniqueName: \"kubernetes.io/projected/4651af45-84f7-428c-85d2-c546b3c64d40-kube-api-access-tnvl9\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:59 crc kubenswrapper[5120]: I1211 16:05:59.065240 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:59 crc kubenswrapper[5120]: I1211 16:05:59.065260 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:59 crc kubenswrapper[5120]: I1211 16:05:59.065283 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-system-session\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:59 crc kubenswrapper[5120]: I1211 16:05:59.065311 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-user-template-login\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:59 crc kubenswrapper[5120]: I1211 16:05:59.065410 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4651af45-84f7-428c-85d2-c546b3c64d40-audit-dir\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:59 crc kubenswrapper[5120]: I1211 16:05:59.066101 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-system-service-ca\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:59 crc kubenswrapper[5120]: I1211 16:05:59.066123 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4651af45-84f7-428c-85d2-c546b3c64d40-audit-policies\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:59 crc kubenswrapper[5120]: I1211 16:05:59.067415 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:59 crc 
kubenswrapper[5120]: I1211 16:05:59.067591 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:59 crc kubenswrapper[5120]: I1211 16:05:59.070458 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-user-template-error\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:59 crc kubenswrapper[5120]: I1211 16:05:59.070578 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-user-template-login\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:59 crc kubenswrapper[5120]: I1211 16:05:59.070792 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-system-router-certs\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:59 crc kubenswrapper[5120]: I1211 16:05:59.071297 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:59 crc kubenswrapper[5120]: I1211 16:05:59.072357 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:59 crc kubenswrapper[5120]: I1211 16:05:59.072582 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:59 crc kubenswrapper[5120]: I1211 16:05:59.075385 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-system-session\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:59 crc kubenswrapper[5120]: I1211 16:05:59.076852 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4651af45-84f7-428c-85d2-c546b3c64d40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " 
pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:59 crc kubenswrapper[5120]: I1211 16:05:59.091440 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnvl9\" (UniqueName: \"kubernetes.io/projected/4651af45-84f7-428c-85d2-c546b3c64d40-kube-api-access-tnvl9\") pod \"oauth-openshift-55b6456b9c-wgcch\" (UID: \"4651af45-84f7-428c-85d2-c546b3c64d40\") " pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:59 crc kubenswrapper[5120]: I1211 16:05:59.212633 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:05:59 crc kubenswrapper[5120]: I1211 16:05:59.646592 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-55b6456b9c-wgcch"] Dec 11 16:05:59 crc kubenswrapper[5120]: I1211 16:05:59.653898 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 16:06:00 crc kubenswrapper[5120]: I1211 16:06:00.435624 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" event={"ID":"4651af45-84f7-428c-85d2-c546b3c64d40","Type":"ContainerStarted","Data":"d9d99cf68b40bb1e2e846dde0a9864079105e666c680e9930919c1f1f8f49e27"} Dec 11 16:06:00 crc kubenswrapper[5120]: I1211 16:06:00.436059 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" event={"ID":"4651af45-84f7-428c-85d2-c546b3c64d40","Type":"ContainerStarted","Data":"f8d59fe870a04c88c1d8ec7421e3cc401f01547dc62cac60962e0b3b3502d30b"} Dec 11 16:06:00 crc kubenswrapper[5120]: I1211 16:06:00.436103 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:06:00 crc kubenswrapper[5120]: I1211 16:06:00.460559 5120 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" podStartSLOduration=31.460539894 podStartE2EDuration="31.460539894s" podCreationTimestamp="2025-12-11 16:05:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:06:00.456579162 +0000 UTC m=+309.710882543" watchObservedRunningTime="2025-12-11 16:06:00.460539894 +0000 UTC m=+309.714843245" Dec 11 16:06:00 crc kubenswrapper[5120]: I1211 16:06:00.753541 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-55b6456b9c-wgcch" Dec 11 16:07:28 crc kubenswrapper[5120]: I1211 16:07:28.718253 5120 patch_prober.go:28] interesting pod/machine-config-daemon-fpg9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 16:07:28 crc kubenswrapper[5120]: I1211 16:07:28.718903 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" podUID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 16:07:58 crc kubenswrapper[5120]: I1211 16:07:58.718436 5120 patch_prober.go:28] interesting pod/machine-config-daemon-fpg9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 16:07:58 crc kubenswrapper[5120]: I1211 16:07:58.719030 5120 prober.go:120] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" podUID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 16:08:28 crc kubenswrapper[5120]: I1211 16:08:28.717744 5120 patch_prober.go:28] interesting pod/machine-config-daemon-fpg9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 16:08:28 crc kubenswrapper[5120]: I1211 16:08:28.718293 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" podUID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 16:08:28 crc kubenswrapper[5120]: I1211 16:08:28.718371 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" Dec 11 16:08:28 crc kubenswrapper[5120]: I1211 16:08:28.719584 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"37cd81fdaf9b948884ac7f04fbf6a66e92e823688abb89ee1140d9a2b9d90eb4"} pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 16:08:28 crc kubenswrapper[5120]: I1211 16:08:28.719672 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" podUID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerName="machine-config-daemon" 
containerID="cri-o://37cd81fdaf9b948884ac7f04fbf6a66e92e823688abb89ee1140d9a2b9d90eb4" gracePeriod=600 Dec 11 16:08:29 crc kubenswrapper[5120]: I1211 16:08:29.268978 5120 generic.go:358] "Generic (PLEG): container finished" podID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerID="37cd81fdaf9b948884ac7f04fbf6a66e92e823688abb89ee1140d9a2b9d90eb4" exitCode=0 Dec 11 16:08:29 crc kubenswrapper[5120]: I1211 16:08:29.269188 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" event={"ID":"e868a29f-b837-4513-ad30-f5b6c4354a09","Type":"ContainerDied","Data":"37cd81fdaf9b948884ac7f04fbf6a66e92e823688abb89ee1140d9a2b9d90eb4"} Dec 11 16:08:29 crc kubenswrapper[5120]: I1211 16:08:29.269373 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" event={"ID":"e868a29f-b837-4513-ad30-f5b6c4354a09","Type":"ContainerStarted","Data":"a09fb695df5d1b3ee680128c4cd59d89388d5ef467e74023daef155b556f17c3"} Dec 11 16:08:29 crc kubenswrapper[5120]: I1211 16:08:29.269395 5120 scope.go:117] "RemoveContainer" containerID="8bded36918f78e5f934a2e80f529e76f291507fa4d302555d0d9666f63505ab7" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.368229 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-j58br"] Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.369010 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-j58br" podUID="38cdad44-c229-4500-b4e7-92c3cafb0974" containerName="kube-rbac-proxy" containerID="cri-o://d06e83d80c0a75a8a0652b12c5028820065a46a2f57aa135bc15a79ca8cf6b1e" gracePeriod=30 Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.369412 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-j58br" 
podUID="38cdad44-c229-4500-b4e7-92c3cafb0974" containerName="ovnkube-cluster-manager" containerID="cri-o://9b3c6bb9699badc81bd7aec34a328e3c3f763f0f01c64c2be7f987eb9850a37f" gracePeriod=30 Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.568697 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bxt85"] Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.569284 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="ovn-controller" containerID="cri-o://4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29" gracePeriod=30 Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.569476 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="sbdb" containerID="cri-o://e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76" gracePeriod=30 Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.569522 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="nbdb" containerID="cri-o://76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d" gracePeriod=30 Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.569559 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="northd" containerID="cri-o://ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df" gracePeriod=30 Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.569598 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" 
podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c" gracePeriod=30 Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.569646 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="ovn-acl-logging" containerID="cri-o://b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281" gracePeriod=30 Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.569788 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="kube-rbac-proxy-node" containerID="cri-o://c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33" gracePeriod=30 Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.594216 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="ovnkube-controller" containerID="cri-o://fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703" gracePeriod=30 Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.721621 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-j58br" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.749915 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-64x8t"] Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.750450 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="38cdad44-c229-4500-b4e7-92c3cafb0974" containerName="ovnkube-cluster-manager" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.750469 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="38cdad44-c229-4500-b4e7-92c3cafb0974" containerName="ovnkube-cluster-manager" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.750487 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="38cdad44-c229-4500-b4e7-92c3cafb0974" containerName="kube-rbac-proxy" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.750493 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="38cdad44-c229-4500-b4e7-92c3cafb0974" containerName="kube-rbac-proxy" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.750586 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="38cdad44-c229-4500-b4e7-92c3cafb0974" containerName="kube-rbac-proxy" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.750596 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="38cdad44-c229-4500-b4e7-92c3cafb0974" containerName="ovnkube-cluster-manager" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.753557 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-64x8t" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.814411 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/38cdad44-c229-4500-b4e7-92c3cafb0974-env-overrides\") pod \"38cdad44-c229-4500-b4e7-92c3cafb0974\" (UID: \"38cdad44-c229-4500-b4e7-92c3cafb0974\") " Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.814469 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmp8v\" (UniqueName: \"kubernetes.io/projected/38cdad44-c229-4500-b4e7-92c3cafb0974-kube-api-access-kmp8v\") pod \"38cdad44-c229-4500-b4e7-92c3cafb0974\" (UID: \"38cdad44-c229-4500-b4e7-92c3cafb0974\") " Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.814509 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/38cdad44-c229-4500-b4e7-92c3cafb0974-ovnkube-config\") pod \"38cdad44-c229-4500-b4e7-92c3cafb0974\" (UID: \"38cdad44-c229-4500-b4e7-92c3cafb0974\") " Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.814660 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/38cdad44-c229-4500-b4e7-92c3cafb0974-ovn-control-plane-metrics-cert\") pod \"38cdad44-c229-4500-b4e7-92c3cafb0974\" (UID: \"38cdad44-c229-4500-b4e7-92c3cafb0974\") " Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.815420 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38cdad44-c229-4500-b4e7-92c3cafb0974-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "38cdad44-c229-4500-b4e7-92c3cafb0974" (UID: "38cdad44-c229-4500-b4e7-92c3cafb0974"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.815436 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38cdad44-c229-4500-b4e7-92c3cafb0974-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "38cdad44-c229-4500-b4e7-92c3cafb0974" (UID: "38cdad44-c229-4500-b4e7-92c3cafb0974"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.821595 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38cdad44-c229-4500-b4e7-92c3cafb0974-kube-api-access-kmp8v" (OuterVolumeSpecName: "kube-api-access-kmp8v") pod "38cdad44-c229-4500-b4e7-92c3cafb0974" (UID: "38cdad44-c229-4500-b4e7-92c3cafb0974"). InnerVolumeSpecName "kube-api-access-kmp8v". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.821640 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38cdad44-c229-4500-b4e7-92c3cafb0974-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "38cdad44-c229-4500-b4e7-92c3cafb0974" (UID: "38cdad44-c229-4500-b4e7-92c3cafb0974"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.839905 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxt85_d25df28f-e707-49ec-a539-9f1d1b40a297/ovn-acl-logging/0.log" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.840480 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxt85_d25df28f-e707-49ec-a539-9f1d1b40a297/ovn-controller/0.log" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.840877 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.880704 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-sgcxk"] Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.881224 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="kubecfg-setup" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.881238 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="kubecfg-setup" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.881248 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="ovnkube-controller" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.881255 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="ovnkube-controller" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.881270 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="northd" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.881276 5120 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="northd" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.881282 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="sbdb" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.881287 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="sbdb" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.881298 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="kube-rbac-proxy-ovn-metrics" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.881304 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="kube-rbac-proxy-ovn-metrics" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.881317 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="ovn-controller" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.881322 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="ovn-controller" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.881331 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="nbdb" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.881336 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="nbdb" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.881343 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="ovn-acl-logging" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 
16:10:11.881348 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="ovn-acl-logging" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.881355 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="kube-rbac-proxy-node" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.881360 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="kube-rbac-proxy-node" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.881551 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="kube-rbac-proxy-node" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.881565 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="ovn-controller" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.881573 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="northd" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.881584 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="ovn-acl-logging" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.881596 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="nbdb" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.881603 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="kube-rbac-proxy-ovn-metrics" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.881610 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="sbdb" Dec 11 16:10:11 crc 
kubenswrapper[5120]: I1211 16:10:11.881618 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerName="ovnkube-controller" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.885910 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.917487 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3ce49410-77ef-43ed-aac8-1865d83d52b6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-64x8t\" (UID: \"3ce49410-77ef-43ed-aac8-1865d83d52b6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-64x8t" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.917565 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3ce49410-77ef-43ed-aac8-1865d83d52b6-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-64x8t\" (UID: \"3ce49410-77ef-43ed-aac8-1865d83d52b6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-64x8t" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.917639 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3ce49410-77ef-43ed-aac8-1865d83d52b6-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-64x8t\" (UID: \"3ce49410-77ef-43ed-aac8-1865d83d52b6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-64x8t" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.917745 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf9qj\" (UniqueName: 
\"kubernetes.io/projected/3ce49410-77ef-43ed-aac8-1865d83d52b6-kube-api-access-rf9qj\") pod \"ovnkube-control-plane-97c9b6c48-64x8t\" (UID: \"3ce49410-77ef-43ed-aac8-1865d83d52b6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-64x8t" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.917898 5120 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/38cdad44-c229-4500-b4e7-92c3cafb0974-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.917916 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kmp8v\" (UniqueName: \"kubernetes.io/projected/38cdad44-c229-4500-b4e7-92c3cafb0974-kube-api-access-kmp8v\") on node \"crc\" DevicePath \"\"" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.917931 5120 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/38cdad44-c229-4500-b4e7-92c3cafb0974-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.918074 5120 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/38cdad44-c229-4500-b4e7-92c3cafb0974-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.924231 5120 generic.go:358] "Generic (PLEG): container finished" podID="38cdad44-c229-4500-b4e7-92c3cafb0974" containerID="9b3c6bb9699badc81bd7aec34a328e3c3f763f0f01c64c2be7f987eb9850a37f" exitCode=0 Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.924266 5120 generic.go:358] "Generic (PLEG): container finished" podID="38cdad44-c229-4500-b4e7-92c3cafb0974" containerID="d06e83d80c0a75a8a0652b12c5028820065a46a2f57aa135bc15a79ca8cf6b1e" exitCode=0 Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.924326 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-j58br" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.924379 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-j58br" event={"ID":"38cdad44-c229-4500-b4e7-92c3cafb0974","Type":"ContainerDied","Data":"9b3c6bb9699badc81bd7aec34a328e3c3f763f0f01c64c2be7f987eb9850a37f"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.924411 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-j58br" event={"ID":"38cdad44-c229-4500-b4e7-92c3cafb0974","Type":"ContainerDied","Data":"d06e83d80c0a75a8a0652b12c5028820065a46a2f57aa135bc15a79ca8cf6b1e"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.924426 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-j58br" event={"ID":"38cdad44-c229-4500-b4e7-92c3cafb0974","Type":"ContainerDied","Data":"fc45727ed5c2d8d65b34cfac06d3e49a46bd823b9502d3f8b0d727a70ee20359"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.924446 5120 scope.go:117] "RemoveContainer" containerID="9b3c6bb9699badc81bd7aec34a328e3c3f763f0f01c64c2be7f987eb9850a37f" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.929510 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxt85_d25df28f-e707-49ec-a539-9f1d1b40a297/ovn-acl-logging/0.log" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930169 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxt85_d25df28f-e707-49ec-a539-9f1d1b40a297/ovn-controller/0.log" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930504 5120 generic.go:358] "Generic (PLEG): container finished" podID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerID="fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703" 
exitCode=0 Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930525 5120 generic.go:358] "Generic (PLEG): container finished" podID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerID="e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76" exitCode=0 Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930535 5120 generic.go:358] "Generic (PLEG): container finished" podID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerID="76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d" exitCode=0 Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930542 5120 generic.go:358] "Generic (PLEG): container finished" podID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerID="ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df" exitCode=0 Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930550 5120 generic.go:358] "Generic (PLEG): container finished" podID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerID="988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c" exitCode=0 Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930557 5120 generic.go:358] "Generic (PLEG): container finished" podID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerID="c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33" exitCode=0 Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930565 5120 generic.go:358] "Generic (PLEG): container finished" podID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerID="b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281" exitCode=143 Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930572 5120 generic.go:358] "Generic (PLEG): container finished" podID="d25df28f-e707-49ec-a539-9f1d1b40a297" containerID="4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29" exitCode=143 Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930670 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" 
event={"ID":"d25df28f-e707-49ec-a539-9f1d1b40a297","Type":"ContainerDied","Data":"fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930692 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" event={"ID":"d25df28f-e707-49ec-a539-9f1d1b40a297","Type":"ContainerDied","Data":"e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930709 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" event={"ID":"d25df28f-e707-49ec-a539-9f1d1b40a297","Type":"ContainerDied","Data":"76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930722 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" event={"ID":"d25df28f-e707-49ec-a539-9f1d1b40a297","Type":"ContainerDied","Data":"ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930735 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" event={"ID":"d25df28f-e707-49ec-a539-9f1d1b40a297","Type":"ContainerDied","Data":"988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930747 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" event={"ID":"d25df28f-e707-49ec-a539-9f1d1b40a297","Type":"ContainerDied","Data":"c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930759 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703"} Dec 11 16:10:11 
crc kubenswrapper[5120]: I1211 16:10:11.930770 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930778 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930785 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930791 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930798 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930805 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930811 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930818 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d0b8a3a9738ada95df16b32a75da31c56d4c8313dc56cf23bf35e5663f0883b4"} Dec 11 16:10:11 crc 
kubenswrapper[5120]: I1211 16:10:11.930827 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" event={"ID":"d25df28f-e707-49ec-a539-9f1d1b40a297","Type":"ContainerDied","Data":"b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930837 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930845 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930852 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930858 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930865 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930872 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930878 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930885 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930891 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d0b8a3a9738ada95df16b32a75da31c56d4c8313dc56cf23bf35e5663f0883b4"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930900 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" event={"ID":"d25df28f-e707-49ec-a539-9f1d1b40a297","Type":"ContainerDied","Data":"4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930913 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930920 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930927 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930933 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930940 5120 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930958 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930965 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930971 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930977 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d0b8a3a9738ada95df16b32a75da31c56d4c8313dc56cf23bf35e5663f0883b4"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930988 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" event={"ID":"d25df28f-e707-49ec-a539-9f1d1b40a297","Type":"ContainerDied","Data":"69f1b544aeee78b0ad6657cc245609d0a90d9bac27f26c85945bc150eac13fee"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.930998 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.931006 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76"} Dec 11 
16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.931013 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.931020 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.931027 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.931034 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.931040 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.931047 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.931054 5120 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d0b8a3a9738ada95df16b32a75da31c56d4c8313dc56cf23bf35e5663f0883b4"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.931234 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bxt85" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.937257 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qzwn6_7143452f-c193-4dbf-872c-a3ae9245f158/kube-multus/0.log" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.937301 5120 generic.go:358] "Generic (PLEG): container finished" podID="7143452f-c193-4dbf-872c-a3ae9245f158" containerID="5ab84bc58f9b4e30ab36c3bd52b5c52d1fbb38194251ce2275df6f68ea13b270" exitCode=2 Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.937363 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qzwn6" event={"ID":"7143452f-c193-4dbf-872c-a3ae9245f158","Type":"ContainerDied","Data":"5ab84bc58f9b4e30ab36c3bd52b5c52d1fbb38194251ce2275df6f68ea13b270"} Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.938355 5120 scope.go:117] "RemoveContainer" containerID="5ab84bc58f9b4e30ab36c3bd52b5c52d1fbb38194251ce2275df6f68ea13b270" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.946021 5120 scope.go:117] "RemoveContainer" containerID="d06e83d80c0a75a8a0652b12c5028820065a46a2f57aa135bc15a79ca8cf6b1e" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.969634 5120 scope.go:117] "RemoveContainer" containerID="9b3c6bb9699badc81bd7aec34a328e3c3f763f0f01c64c2be7f987eb9850a37f" Dec 11 16:10:11 crc kubenswrapper[5120]: E1211 16:10:11.970423 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b3c6bb9699badc81bd7aec34a328e3c3f763f0f01c64c2be7f987eb9850a37f\": container with ID starting with 9b3c6bb9699badc81bd7aec34a328e3c3f763f0f01c64c2be7f987eb9850a37f not found: ID does not exist" containerID="9b3c6bb9699badc81bd7aec34a328e3c3f763f0f01c64c2be7f987eb9850a37f" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.970478 5120 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9b3c6bb9699badc81bd7aec34a328e3c3f763f0f01c64c2be7f987eb9850a37f"} err="failed to get container status \"9b3c6bb9699badc81bd7aec34a328e3c3f763f0f01c64c2be7f987eb9850a37f\": rpc error: code = NotFound desc = could not find container \"9b3c6bb9699badc81bd7aec34a328e3c3f763f0f01c64c2be7f987eb9850a37f\": container with ID starting with 9b3c6bb9699badc81bd7aec34a328e3c3f763f0f01c64c2be7f987eb9850a37f not found: ID does not exist" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.970510 5120 scope.go:117] "RemoveContainer" containerID="d06e83d80c0a75a8a0652b12c5028820065a46a2f57aa135bc15a79ca8cf6b1e" Dec 11 16:10:11 crc kubenswrapper[5120]: E1211 16:10:11.971651 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d06e83d80c0a75a8a0652b12c5028820065a46a2f57aa135bc15a79ca8cf6b1e\": container with ID starting with d06e83d80c0a75a8a0652b12c5028820065a46a2f57aa135bc15a79ca8cf6b1e not found: ID does not exist" containerID="d06e83d80c0a75a8a0652b12c5028820065a46a2f57aa135bc15a79ca8cf6b1e" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.971705 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d06e83d80c0a75a8a0652b12c5028820065a46a2f57aa135bc15a79ca8cf6b1e"} err="failed to get container status \"d06e83d80c0a75a8a0652b12c5028820065a46a2f57aa135bc15a79ca8cf6b1e\": rpc error: code = NotFound desc = could not find container \"d06e83d80c0a75a8a0652b12c5028820065a46a2f57aa135bc15a79ca8cf6b1e\": container with ID starting with d06e83d80c0a75a8a0652b12c5028820065a46a2f57aa135bc15a79ca8cf6b1e not found: ID does not exist" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.971754 5120 scope.go:117] "RemoveContainer" containerID="9b3c6bb9699badc81bd7aec34a328e3c3f763f0f01c64c2be7f987eb9850a37f" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.972290 5120 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"9b3c6bb9699badc81bd7aec34a328e3c3f763f0f01c64c2be7f987eb9850a37f"} err="failed to get container status \"9b3c6bb9699badc81bd7aec34a328e3c3f763f0f01c64c2be7f987eb9850a37f\": rpc error: code = NotFound desc = could not find container \"9b3c6bb9699badc81bd7aec34a328e3c3f763f0f01c64c2be7f987eb9850a37f\": container with ID starting with 9b3c6bb9699badc81bd7aec34a328e3c3f763f0f01c64c2be7f987eb9850a37f not found: ID does not exist" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.972323 5120 scope.go:117] "RemoveContainer" containerID="d06e83d80c0a75a8a0652b12c5028820065a46a2f57aa135bc15a79ca8cf6b1e" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.972637 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d06e83d80c0a75a8a0652b12c5028820065a46a2f57aa135bc15a79ca8cf6b1e"} err="failed to get container status \"d06e83d80c0a75a8a0652b12c5028820065a46a2f57aa135bc15a79ca8cf6b1e\": rpc error: code = NotFound desc = could not find container \"d06e83d80c0a75a8a0652b12c5028820065a46a2f57aa135bc15a79ca8cf6b1e\": container with ID starting with d06e83d80c0a75a8a0652b12c5028820065a46a2f57aa135bc15a79ca8cf6b1e not found: ID does not exist" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.972670 5120 scope.go:117] "RemoveContainer" containerID="fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703" Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.972740 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-j58br"] Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.978045 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-j58br"] Dec 11 16:10:11 crc kubenswrapper[5120]: I1211 16:10:11.991592 5120 scope.go:117] "RemoveContainer" containerID="e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76" Dec 11 
16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.007877 5120 scope.go:117] "RemoveContainer" containerID="76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.019347 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-run-openvswitch\") pod \"d25df28f-e707-49ec-a539-9f1d1b40a297\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.019395 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-var-lib-cni-networks-ovn-kubernetes\") pod \"d25df28f-e707-49ec-a539-9f1d1b40a297\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.019425 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-var-lib-openvswitch\") pod \"d25df28f-e707-49ec-a539-9f1d1b40a297\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.019447 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-cni-bin\") pod \"d25df28f-e707-49ec-a539-9f1d1b40a297\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.019475 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-run-netns\") pod \"d25df28f-e707-49ec-a539-9f1d1b40a297\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") 
" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.019486 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "d25df28f-e707-49ec-a539-9f1d1b40a297" (UID: "d25df28f-e707-49ec-a539-9f1d1b40a297"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.019519 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "d25df28f-e707-49ec-a539-9f1d1b40a297" (UID: "d25df28f-e707-49ec-a539-9f1d1b40a297"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.019549 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "d25df28f-e707-49ec-a539-9f1d1b40a297" (UID: "d25df28f-e707-49ec-a539-9f1d1b40a297"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.019544 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d25df28f-e707-49ec-a539-9f1d1b40a297-ovnkube-script-lib\") pod \"d25df28f-e707-49ec-a539-9f1d1b40a297\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.019563 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "d25df28f-e707-49ec-a539-9f1d1b40a297" (UID: "d25df28f-e707-49ec-a539-9f1d1b40a297"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.019676 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-systemd-units\") pod \"d25df28f-e707-49ec-a539-9f1d1b40a297\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.019718 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-run-ovn\") pod \"d25df28f-e707-49ec-a539-9f1d1b40a297\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.019592 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "d25df28f-e707-49ec-a539-9f1d1b40a297" (UID: "d25df28f-e707-49ec-a539-9f1d1b40a297"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.019720 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "d25df28f-e707-49ec-a539-9f1d1b40a297" (UID: "d25df28f-e707-49ec-a539-9f1d1b40a297"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.019796 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d25df28f-e707-49ec-a539-9f1d1b40a297-ovnkube-config\") pod \"d25df28f-e707-49ec-a539-9f1d1b40a297\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.019821 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-run-systemd\") pod \"d25df28f-e707-49ec-a539-9f1d1b40a297\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.019840 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "d25df28f-e707-49ec-a539-9f1d1b40a297" (UID: "d25df28f-e707-49ec-a539-9f1d1b40a297"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.019882 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-kubelet\") pod \"d25df28f-e707-49ec-a539-9f1d1b40a297\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.019921 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-node-log\") pod \"d25df28f-e707-49ec-a539-9f1d1b40a297\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.019963 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "d25df28f-e707-49ec-a539-9f1d1b40a297" (UID: "d25df28f-e707-49ec-a539-9f1d1b40a297"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.019948 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-542ds\" (UniqueName: \"kubernetes.io/projected/d25df28f-e707-49ec-a539-9f1d1b40a297-kube-api-access-542ds\") pod \"d25df28f-e707-49ec-a539-9f1d1b40a297\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.019992 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-node-log" (OuterVolumeSpecName: "node-log") pod "d25df28f-e707-49ec-a539-9f1d1b40a297" (UID: "d25df28f-e707-49ec-a539-9f1d1b40a297"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.020070 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d25df28f-e707-49ec-a539-9f1d1b40a297-env-overrides\") pod \"d25df28f-e707-49ec-a539-9f1d1b40a297\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.020130 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-etc-openvswitch\") pod \"d25df28f-e707-49ec-a539-9f1d1b40a297\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.020247 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-log-socket\") pod \"d25df28f-e707-49ec-a539-9f1d1b40a297\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.020251 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "d25df28f-e707-49ec-a539-9f1d1b40a297" (UID: "d25df28f-e707-49ec-a539-9f1d1b40a297"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.020313 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-run-ovn-kubernetes\") pod \"d25df28f-e707-49ec-a539-9f1d1b40a297\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.020326 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-log-socket" (OuterVolumeSpecName: "log-socket") pod "d25df28f-e707-49ec-a539-9f1d1b40a297" (UID: "d25df28f-e707-49ec-a539-9f1d1b40a297"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.020361 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-slash\") pod \"d25df28f-e707-49ec-a539-9f1d1b40a297\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.020397 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-cni-netd\") pod \"d25df28f-e707-49ec-a539-9f1d1b40a297\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.020437 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-slash" (OuterVolumeSpecName: "host-slash") pod "d25df28f-e707-49ec-a539-9f1d1b40a297" (UID: "d25df28f-e707-49ec-a539-9f1d1b40a297"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.020461 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "d25df28f-e707-49ec-a539-9f1d1b40a297" (UID: "d25df28f-e707-49ec-a539-9f1d1b40a297"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.020519 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "d25df28f-e707-49ec-a539-9f1d1b40a297" (UID: "d25df28f-e707-49ec-a539-9f1d1b40a297"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.020452 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d25df28f-e707-49ec-a539-9f1d1b40a297-ovn-node-metrics-cert\") pod \"d25df28f-e707-49ec-a539-9f1d1b40a297\" (UID: \"d25df28f-e707-49ec-a539-9f1d1b40a297\") " Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.020620 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d25df28f-e707-49ec-a539-9f1d1b40a297-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "d25df28f-e707-49ec-a539-9f1d1b40a297" (UID: "d25df28f-e707-49ec-a539-9f1d1b40a297"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.020646 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d25df28f-e707-49ec-a539-9f1d1b40a297-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "d25df28f-e707-49ec-a539-9f1d1b40a297" (UID: "d25df28f-e707-49ec-a539-9f1d1b40a297"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.020672 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d25df28f-e707-49ec-a539-9f1d1b40a297-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "d25df28f-e707-49ec-a539-9f1d1b40a297" (UID: "d25df28f-e707-49ec-a539-9f1d1b40a297"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.020888 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rf9qj\" (UniqueName: \"kubernetes.io/projected/3ce49410-77ef-43ed-aac8-1865d83d52b6-kube-api-access-rf9qj\") pod \"ovnkube-control-plane-97c9b6c48-64x8t\" (UID: \"3ce49410-77ef-43ed-aac8-1865d83d52b6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-64x8t" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.020924 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-etc-openvswitch\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.020946 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.020978 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpxct\" (UniqueName: \"kubernetes.io/projected/8b8ad823-fd58-4438-8d21-7e8cbe20252e-kube-api-access-zpxct\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.021037 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-node-log\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.021057 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-host-cni-netd\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.021084 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-run-systemd\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.021100 5120 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-host-run-netns\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.021244 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-host-run-ovn-kubernetes\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.021403 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-log-socket\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.021490 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8b8ad823-fd58-4438-8d21-7e8cbe20252e-env-overrides\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.021534 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-host-slash\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.021591 5120 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-systemd-units\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.021642 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3ce49410-77ef-43ed-aac8-1865d83d52b6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-64x8t\" (UID: \"3ce49410-77ef-43ed-aac8-1865d83d52b6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-64x8t" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.021788 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3ce49410-77ef-43ed-aac8-1865d83d52b6-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-64x8t\" (UID: \"3ce49410-77ef-43ed-aac8-1865d83d52b6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-64x8t" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.021856 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8b8ad823-fd58-4438-8d21-7e8cbe20252e-ovnkube-config\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.021904 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-var-lib-openvswitch\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.022032 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-host-kubelet\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.022109 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3ce49410-77ef-43ed-aac8-1865d83d52b6-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-64x8t\" (UID: \"3ce49410-77ef-43ed-aac8-1865d83d52b6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-64x8t" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.022193 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-run-ovn\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.022226 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-run-openvswitch\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.022253 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-host-cni-bin\") pod \"ovnkube-node-sgcxk\" (UID: 
\"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.022391 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8b8ad823-fd58-4438-8d21-7e8cbe20252e-ovn-node-metrics-cert\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.022548 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8b8ad823-fd58-4438-8d21-7e8cbe20252e-ovnkube-script-lib\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.022654 5120 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.022668 5120 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.022678 5120 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.022689 5120 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.022698 5120 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.022709 5120 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d25df28f-e707-49ec-a539-9f1d1b40a297-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.022717 5120 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.022725 5120 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.022733 5120 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d25df28f-e707-49ec-a539-9f1d1b40a297-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.022744 5120 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.022767 5120 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-node-log\") on node \"crc\" DevicePath \"\"" Dec 11 16:10:12 crc 
kubenswrapper[5120]: I1211 16:10:12.022779 5120 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d25df28f-e707-49ec-a539-9f1d1b40a297-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.022790 5120 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.022798 5120 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-log-socket\") on node \"crc\" DevicePath \"\"" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.022807 5120 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.022832 5120 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-slash\") on node \"crc\" DevicePath \"\"" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.022840 5120 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.023068 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3ce49410-77ef-43ed-aac8-1865d83d52b6-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-64x8t\" (UID: \"3ce49410-77ef-43ed-aac8-1865d83d52b6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-64x8t" 
Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.023454 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3ce49410-77ef-43ed-aac8-1865d83d52b6-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-64x8t\" (UID: \"3ce49410-77ef-43ed-aac8-1865d83d52b6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-64x8t" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.026329 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d25df28f-e707-49ec-a539-9f1d1b40a297-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "d25df28f-e707-49ec-a539-9f1d1b40a297" (UID: "d25df28f-e707-49ec-a539-9f1d1b40a297"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.027828 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3ce49410-77ef-43ed-aac8-1865d83d52b6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-64x8t\" (UID: \"3ce49410-77ef-43ed-aac8-1865d83d52b6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-64x8t" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.032380 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d25df28f-e707-49ec-a539-9f1d1b40a297-kube-api-access-542ds" (OuterVolumeSpecName: "kube-api-access-542ds") pod "d25df28f-e707-49ec-a539-9f1d1b40a297" (UID: "d25df28f-e707-49ec-a539-9f1d1b40a297"). InnerVolumeSpecName "kube-api-access-542ds". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.036719 5120 scope.go:117] "RemoveContainer" containerID="ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.038805 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "d25df28f-e707-49ec-a539-9f1d1b40a297" (UID: "d25df28f-e707-49ec-a539-9f1d1b40a297"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.042404 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf9qj\" (UniqueName: \"kubernetes.io/projected/3ce49410-77ef-43ed-aac8-1865d83d52b6-kube-api-access-rf9qj\") pod \"ovnkube-control-plane-97c9b6c48-64x8t\" (UID: \"3ce49410-77ef-43ed-aac8-1865d83d52b6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-64x8t" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.050688 5120 scope.go:117] "RemoveContainer" containerID="988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.065812 5120 scope.go:117] "RemoveContainer" containerID="c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.084455 5120 scope.go:117] "RemoveContainer" containerID="b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.098714 5120 scope.go:117] "RemoveContainer" containerID="4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.113063 5120 scope.go:117] "RemoveContainer" containerID="d0b8a3a9738ada95df16b32a75da31c56d4c8313dc56cf23bf35e5663f0883b4" Dec 11 
16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.124040 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-systemd-units\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.124208 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-systemd-units\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.124170 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8b8ad823-fd58-4438-8d21-7e8cbe20252e-ovnkube-config\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.124345 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-var-lib-openvswitch\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.124404 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-host-kubelet\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.124439 5120 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-var-lib-openvswitch\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.124463 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-run-ovn\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.124505 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-run-ovn\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.124530 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-run-openvswitch\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.124563 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-host-cni-bin\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.124603 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/8b8ad823-fd58-4438-8d21-7e8cbe20252e-ovn-node-metrics-cert\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.124640 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8b8ad823-fd58-4438-8d21-7e8cbe20252e-ovnkube-script-lib\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.124693 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-etc-openvswitch\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.124727 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.124759 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zpxct\" (UniqueName: \"kubernetes.io/projected/8b8ad823-fd58-4438-8d21-7e8cbe20252e-kube-api-access-zpxct\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.124922 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-host-kubelet\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.124961 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8b8ad823-fd58-4438-8d21-7e8cbe20252e-ovnkube-config\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.124988 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-host-cni-bin\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.125018 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.125035 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-run-openvswitch\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.125205 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-node-log\") 
pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.125209 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-etc-openvswitch\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.124847 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-node-log\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.125757 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-host-cni-netd\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.125795 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-run-systemd\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.125832 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-host-run-netns\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.125872 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-host-cni-netd\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.125909 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-run-systemd\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.125999 5120 scope.go:117] "RemoveContainer" containerID="fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.126048 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-host-run-netns\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.126126 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-host-run-ovn-kubernetes\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.126223 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-host-run-ovn-kubernetes\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.126267 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-log-socket\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.126356 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8b8ad823-fd58-4438-8d21-7e8cbe20252e-env-overrides\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.126421 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-log-socket\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.126431 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-host-slash\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.126460 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8b8ad823-fd58-4438-8d21-7e8cbe20252e-ovnkube-script-lib\") pod 
\"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.126478 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8b8ad823-fd58-4438-8d21-7e8cbe20252e-host-slash\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.126524 5120 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d25df28f-e707-49ec-a539-9f1d1b40a297-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.126545 5120 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d25df28f-e707-49ec-a539-9f1d1b40a297-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.126557 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-542ds\" (UniqueName: \"kubernetes.io/projected/d25df28f-e707-49ec-a539-9f1d1b40a297-kube-api-access-542ds\") on node \"crc\" DevicePath \"\"" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.127270 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8b8ad823-fd58-4438-8d21-7e8cbe20252e-env-overrides\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: E1211 16:10:12.127459 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703\": container with ID starting with 
fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703 not found: ID does not exist" containerID="fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.127507 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703"} err="failed to get container status \"fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703\": rpc error: code = NotFound desc = could not find container \"fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703\": container with ID starting with fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703 not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.127543 5120 scope.go:117] "RemoveContainer" containerID="e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.129261 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8b8ad823-fd58-4438-8d21-7e8cbe20252e-ovn-node-metrics-cert\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: E1211 16:10:12.129582 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76\": container with ID starting with e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76 not found: ID does not exist" containerID="e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.129653 5120 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76"} err="failed to get container status \"e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76\": rpc error: code = NotFound desc = could not find container \"e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76\": container with ID starting with e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76 not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.129683 5120 scope.go:117] "RemoveContainer" containerID="76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d" Dec 11 16:10:12 crc kubenswrapper[5120]: E1211 16:10:12.133412 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d\": container with ID starting with 76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d not found: ID does not exist" containerID="76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.133456 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d"} err="failed to get container status \"76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d\": rpc error: code = NotFound desc = could not find container \"76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d\": container with ID starting with 76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.133482 5120 scope.go:117] "RemoveContainer" containerID="ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df" Dec 11 16:10:12 crc kubenswrapper[5120]: E1211 16:10:12.133858 5120 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df\": container with ID starting with ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df not found: ID does not exist" containerID="ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.133890 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df"} err="failed to get container status \"ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df\": rpc error: code = NotFound desc = could not find container \"ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df\": container with ID starting with ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.133917 5120 scope.go:117] "RemoveContainer" containerID="988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c" Dec 11 16:10:12 crc kubenswrapper[5120]: E1211 16:10:12.134236 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c\": container with ID starting with 988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c not found: ID does not exist" containerID="988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.134267 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c"} err="failed to get container status \"988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c\": rpc error: code = NotFound desc = could not find container 
\"988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c\": container with ID starting with 988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.134288 5120 scope.go:117] "RemoveContainer" containerID="c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33" Dec 11 16:10:12 crc kubenswrapper[5120]: E1211 16:10:12.134611 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33\": container with ID starting with c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33 not found: ID does not exist" containerID="c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.134651 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33"} err="failed to get container status \"c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33\": rpc error: code = NotFound desc = could not find container \"c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33\": container with ID starting with c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33 not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.134677 5120 scope.go:117] "RemoveContainer" containerID="b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.134634 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-64x8t" Dec 11 16:10:12 crc kubenswrapper[5120]: E1211 16:10:12.134919 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281\": container with ID starting with b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281 not found: ID does not exist" containerID="b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.134965 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281"} err="failed to get container status \"b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281\": rpc error: code = NotFound desc = could not find container \"b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281\": container with ID starting with b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281 not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.134990 5120 scope.go:117] "RemoveContainer" containerID="4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29" Dec 11 16:10:12 crc kubenswrapper[5120]: E1211 16:10:12.135316 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29\": container with ID starting with 4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29 not found: ID does not exist" containerID="4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.135361 5120 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29"} err="failed to get container status \"4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29\": rpc error: code = NotFound desc = could not find container \"4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29\": container with ID starting with 4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29 not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.135386 5120 scope.go:117] "RemoveContainer" containerID="d0b8a3a9738ada95df16b32a75da31c56d4c8313dc56cf23bf35e5663f0883b4" Dec 11 16:10:12 crc kubenswrapper[5120]: E1211 16:10:12.135669 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0b8a3a9738ada95df16b32a75da31c56d4c8313dc56cf23bf35e5663f0883b4\": container with ID starting with d0b8a3a9738ada95df16b32a75da31c56d4c8313dc56cf23bf35e5663f0883b4 not found: ID does not exist" containerID="d0b8a3a9738ada95df16b32a75da31c56d4c8313dc56cf23bf35e5663f0883b4" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.135727 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0b8a3a9738ada95df16b32a75da31c56d4c8313dc56cf23bf35e5663f0883b4"} err="failed to get container status \"d0b8a3a9738ada95df16b32a75da31c56d4c8313dc56cf23bf35e5663f0883b4\": rpc error: code = NotFound desc = could not find container \"d0b8a3a9738ada95df16b32a75da31c56d4c8313dc56cf23bf35e5663f0883b4\": container with ID starting with d0b8a3a9738ada95df16b32a75da31c56d4c8313dc56cf23bf35e5663f0883b4 not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.135760 5120 scope.go:117] "RemoveContainer" containerID="fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.136099 5120 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703"} err="failed to get container status \"fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703\": rpc error: code = NotFound desc = could not find container \"fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703\": container with ID starting with fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703 not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.136140 5120 scope.go:117] "RemoveContainer" containerID="e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.136420 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76"} err="failed to get container status \"e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76\": rpc error: code = NotFound desc = could not find container \"e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76\": container with ID starting with e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76 not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.136455 5120 scope.go:117] "RemoveContainer" containerID="76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.136753 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d"} err="failed to get container status \"76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d\": rpc error: code = NotFound desc = could not find container \"76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d\": container with ID starting with 76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d not 
found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.136788 5120 scope.go:117] "RemoveContainer" containerID="ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.137115 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df"} err="failed to get container status \"ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df\": rpc error: code = NotFound desc = could not find container \"ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df\": container with ID starting with ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.137194 5120 scope.go:117] "RemoveContainer" containerID="988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.137477 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c"} err="failed to get container status \"988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c\": rpc error: code = NotFound desc = could not find container \"988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c\": container with ID starting with 988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.137519 5120 scope.go:117] "RemoveContainer" containerID="c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.137758 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33"} err="failed to get 
container status \"c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33\": rpc error: code = NotFound desc = could not find container \"c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33\": container with ID starting with c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33 not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.137791 5120 scope.go:117] "RemoveContainer" containerID="b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.138008 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281"} err="failed to get container status \"b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281\": rpc error: code = NotFound desc = could not find container \"b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281\": container with ID starting with b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281 not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.138045 5120 scope.go:117] "RemoveContainer" containerID="4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.138390 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29"} err="failed to get container status \"4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29\": rpc error: code = NotFound desc = could not find container \"4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29\": container with ID starting with 4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29 not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.138429 5120 scope.go:117] "RemoveContainer" 
containerID="d0b8a3a9738ada95df16b32a75da31c56d4c8313dc56cf23bf35e5663f0883b4" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.138706 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0b8a3a9738ada95df16b32a75da31c56d4c8313dc56cf23bf35e5663f0883b4"} err="failed to get container status \"d0b8a3a9738ada95df16b32a75da31c56d4c8313dc56cf23bf35e5663f0883b4\": rpc error: code = NotFound desc = could not find container \"d0b8a3a9738ada95df16b32a75da31c56d4c8313dc56cf23bf35e5663f0883b4\": container with ID starting with d0b8a3a9738ada95df16b32a75da31c56d4c8313dc56cf23bf35e5663f0883b4 not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.138738 5120 scope.go:117] "RemoveContainer" containerID="fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.138990 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703"} err="failed to get container status \"fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703\": rpc error: code = NotFound desc = could not find container \"fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703\": container with ID starting with fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703 not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.139031 5120 scope.go:117] "RemoveContainer" containerID="e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.139308 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76"} err="failed to get container status \"e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76\": rpc error: code = NotFound desc = could 
not find container \"e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76\": container with ID starting with e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76 not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.139343 5120 scope.go:117] "RemoveContainer" containerID="76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.139568 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d"} err="failed to get container status \"76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d\": rpc error: code = NotFound desc = could not find container \"76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d\": container with ID starting with 76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.139606 5120 scope.go:117] "RemoveContainer" containerID="ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.140006 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df"} err="failed to get container status \"ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df\": rpc error: code = NotFound desc = could not find container \"ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df\": container with ID starting with ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.140073 5120 scope.go:117] "RemoveContainer" containerID="988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 
16:10:12.140546 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c"} err="failed to get container status \"988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c\": rpc error: code = NotFound desc = could not find container \"988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c\": container with ID starting with 988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.140573 5120 scope.go:117] "RemoveContainer" containerID="c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.140882 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33"} err="failed to get container status \"c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33\": rpc error: code = NotFound desc = could not find container \"c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33\": container with ID starting with c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33 not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.140919 5120 scope.go:117] "RemoveContainer" containerID="b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.141207 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281"} err="failed to get container status \"b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281\": rpc error: code = NotFound desc = could not find container \"b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281\": container with ID starting with 
b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281 not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.141233 5120 scope.go:117] "RemoveContainer" containerID="4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.141456 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29"} err="failed to get container status \"4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29\": rpc error: code = NotFound desc = could not find container \"4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29\": container with ID starting with 4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29 not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.141492 5120 scope.go:117] "RemoveContainer" containerID="d0b8a3a9738ada95df16b32a75da31c56d4c8313dc56cf23bf35e5663f0883b4" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.141691 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0b8a3a9738ada95df16b32a75da31c56d4c8313dc56cf23bf35e5663f0883b4"} err="failed to get container status \"d0b8a3a9738ada95df16b32a75da31c56d4c8313dc56cf23bf35e5663f0883b4\": rpc error: code = NotFound desc = could not find container \"d0b8a3a9738ada95df16b32a75da31c56d4c8313dc56cf23bf35e5663f0883b4\": container with ID starting with d0b8a3a9738ada95df16b32a75da31c56d4c8313dc56cf23bf35e5663f0883b4 not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.141715 5120 scope.go:117] "RemoveContainer" containerID="fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.141980 5120 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703"} err="failed to get container status \"fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703\": rpc error: code = NotFound desc = could not find container \"fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703\": container with ID starting with fb6ab2db7dedacc9368c568672f69847d52edc40aef395672717866593b19703 not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.142010 5120 scope.go:117] "RemoveContainer" containerID="e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.142246 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76"} err="failed to get container status \"e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76\": rpc error: code = NotFound desc = could not find container \"e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76\": container with ID starting with e65ef58ba33a4c988bed6034a40a7b2e59257399763d341f80fcbdb35b5d8d76 not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.142270 5120 scope.go:117] "RemoveContainer" containerID="76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.142498 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d"} err="failed to get container status \"76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d\": rpc error: code = NotFound desc = could not find container \"76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d\": container with ID starting with 76afcf916a0847b8892ce22a58e490ede797aab49e84ffcc9cf16eb54e80995d not found: ID does not 
exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.142530 5120 scope.go:117] "RemoveContainer" containerID="ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.142722 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df"} err="failed to get container status \"ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df\": rpc error: code = NotFound desc = could not find container \"ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df\": container with ID starting with ec7d4f9d11fedaf05603affe3ef8056bec1fb71cafa3e9fdb643e50012d086df not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.142745 5120 scope.go:117] "RemoveContainer" containerID="988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.142921 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c"} err="failed to get container status \"988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c\": rpc error: code = NotFound desc = could not find container \"988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c\": container with ID starting with 988de64a79a8ad4378230c45c9d2e6865a282f62fca664882761bd068c420a9c not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.142953 5120 scope.go:117] "RemoveContainer" containerID="c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.143138 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33"} err="failed to get container status 
\"c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33\": rpc error: code = NotFound desc = could not find container \"c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33\": container with ID starting with c8c913855ddf5e5b455aa5ff4dae8b56b95891197d8427846ac4ada1e0a31e33 not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.143188 5120 scope.go:117] "RemoveContainer" containerID="b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.143408 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281"} err="failed to get container status \"b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281\": rpc error: code = NotFound desc = could not find container \"b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281\": container with ID starting with b965f7029e699bdeece4a24816acedb204713e5c98284b2c72140d8e8c043281 not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.143432 5120 scope.go:117] "RemoveContainer" containerID="4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.143684 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29"} err="failed to get container status \"4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29\": rpc error: code = NotFound desc = could not find container \"4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29\": container with ID starting with 4f584ffd4c219d7379c165711299d04d0c32c1073aef6b808f91a1afa0ddad29 not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.143722 5120 scope.go:117] "RemoveContainer" 
containerID="d0b8a3a9738ada95df16b32a75da31c56d4c8313dc56cf23bf35e5663f0883b4" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.143952 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0b8a3a9738ada95df16b32a75da31c56d4c8313dc56cf23bf35e5663f0883b4"} err="failed to get container status \"d0b8a3a9738ada95df16b32a75da31c56d4c8313dc56cf23bf35e5663f0883b4\": rpc error: code = NotFound desc = could not find container \"d0b8a3a9738ada95df16b32a75da31c56d4c8313dc56cf23bf35e5663f0883b4\": container with ID starting with d0b8a3a9738ada95df16b32a75da31c56d4c8313dc56cf23bf35e5663f0883b4 not found: ID does not exist" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.148757 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpxct\" (UniqueName: \"kubernetes.io/projected/8b8ad823-fd58-4438-8d21-7e8cbe20252e-kube-api-access-zpxct\") pod \"ovnkube-node-sgcxk\" (UID: \"8b8ad823-fd58-4438-8d21-7e8cbe20252e\") " pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: W1211 16:10:12.151597 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ce49410_77ef_43ed_aac8_1865d83d52b6.slice/crio-f3a644ac29ab26fdbb214a19e4fd1f6645bf303bce3420ff24af00fe10ba9776 WatchSource:0}: Error finding container f3a644ac29ab26fdbb214a19e4fd1f6645bf303bce3420ff24af00fe10ba9776: Status 404 returned error can't find the container with id f3a644ac29ab26fdbb214a19e4fd1f6645bf303bce3420ff24af00fe10ba9776 Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.205761 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:12 crc kubenswrapper[5120]: W1211 16:10:12.250408 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b8ad823_fd58_4438_8d21_7e8cbe20252e.slice/crio-3579cfaf4452a45793ffd2e24bd1da87c1c5f6023f9fb1b16a01501bd2606ecd WatchSource:0}: Error finding container 3579cfaf4452a45793ffd2e24bd1da87c1c5f6023f9fb1b16a01501bd2606ecd: Status 404 returned error can't find the container with id 3579cfaf4452a45793ffd2e24bd1da87c1c5f6023f9fb1b16a01501bd2606ecd Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.288639 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bxt85"] Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.292011 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bxt85"] Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.948874 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qzwn6_7143452f-c193-4dbf-872c-a3ae9245f158/kube-multus/0.log" Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.949234 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qzwn6" event={"ID":"7143452f-c193-4dbf-872c-a3ae9245f158","Type":"ContainerStarted","Data":"4af629d3fc5e3d6bc006b914d861f4c56a4a5217b60956acd480f04730fb295f"} Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.950813 5120 generic.go:358] "Generic (PLEG): container finished" podID="8b8ad823-fd58-4438-8d21-7e8cbe20252e" containerID="3cb80ae5767f083430adc8d798eecdb1d429ea8a6740855d4f60db99ce74e8bc" exitCode=0 Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.950999 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" 
event={"ID":"8b8ad823-fd58-4438-8d21-7e8cbe20252e","Type":"ContainerDied","Data":"3cb80ae5767f083430adc8d798eecdb1d429ea8a6740855d4f60db99ce74e8bc"} Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.951053 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" event={"ID":"8b8ad823-fd58-4438-8d21-7e8cbe20252e","Type":"ContainerStarted","Data":"3579cfaf4452a45793ffd2e24bd1da87c1c5f6023f9fb1b16a01501bd2606ecd"} Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.954070 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-64x8t" event={"ID":"3ce49410-77ef-43ed-aac8-1865d83d52b6","Type":"ContainerStarted","Data":"2c50aa4329ab8015fb2e7af803f70d7333f08ef9b735500ef31fdf556bf4bed7"} Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.954117 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-64x8t" event={"ID":"3ce49410-77ef-43ed-aac8-1865d83d52b6","Type":"ContainerStarted","Data":"aac8eafa92e04191199caad00f6a829765245d4e93a76f2218f2d705b5a6a4cc"} Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.954135 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-64x8t" event={"ID":"3ce49410-77ef-43ed-aac8-1865d83d52b6","Type":"ContainerStarted","Data":"f3a644ac29ab26fdbb214a19e4fd1f6645bf303bce3420ff24af00fe10ba9776"} Dec 11 16:10:12 crc kubenswrapper[5120]: I1211 16:10:12.988285 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-64x8t" podStartSLOduration=1.988266736 podStartE2EDuration="1.988266736s" podCreationTimestamp="2025-12-11 16:10:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:10:12.98649289 +0000 UTC m=+562.240796251" 
watchObservedRunningTime="2025-12-11 16:10:12.988266736 +0000 UTC m=+562.242570077" Dec 11 16:10:13 crc kubenswrapper[5120]: I1211 16:10:13.032042 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38cdad44-c229-4500-b4e7-92c3cafb0974" path="/var/lib/kubelet/pods/38cdad44-c229-4500-b4e7-92c3cafb0974/volumes" Dec 11 16:10:13 crc kubenswrapper[5120]: I1211 16:10:13.032784 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d25df28f-e707-49ec-a539-9f1d1b40a297" path="/var/lib/kubelet/pods/d25df28f-e707-49ec-a539-9f1d1b40a297/volumes" Dec 11 16:10:13 crc kubenswrapper[5120]: I1211 16:10:13.963602 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" event={"ID":"8b8ad823-fd58-4438-8d21-7e8cbe20252e","Type":"ContainerStarted","Data":"4f6842e0647f0da309e89748998535b27a8411f34ca50347ed6b626863775e39"} Dec 11 16:10:13 crc kubenswrapper[5120]: I1211 16:10:13.964100 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" event={"ID":"8b8ad823-fd58-4438-8d21-7e8cbe20252e","Type":"ContainerStarted","Data":"76112247f0e54b4f9c3201c7178b5da9a84e6df58c5f97061c20936545b8dee0"} Dec 11 16:10:13 crc kubenswrapper[5120]: I1211 16:10:13.964122 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" event={"ID":"8b8ad823-fd58-4438-8d21-7e8cbe20252e","Type":"ContainerStarted","Data":"b0912c63950c7504075ddfc0ee26347fd76ceffd619b60f9616fb72887a7c8c1"} Dec 11 16:10:13 crc kubenswrapper[5120]: I1211 16:10:13.964141 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" event={"ID":"8b8ad823-fd58-4438-8d21-7e8cbe20252e","Type":"ContainerStarted","Data":"52977b80e4b3214da27741adb7e8a80818d40c39c0a75b92230762180e069b2e"} Dec 11 16:10:13 crc kubenswrapper[5120]: I1211 16:10:13.964189 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" event={"ID":"8b8ad823-fd58-4438-8d21-7e8cbe20252e","Type":"ContainerStarted","Data":"f0891c72344ea9ac5dbd4a58c0858de65ac1004d720205d1e79bcd370bff5480"} Dec 11 16:10:13 crc kubenswrapper[5120]: I1211 16:10:13.964206 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" event={"ID":"8b8ad823-fd58-4438-8d21-7e8cbe20252e","Type":"ContainerStarted","Data":"bf459161cf9d2839e1497b9e2e8d19bcc44304bf766762611dd3300c8c2b0276"} Dec 11 16:10:15 crc kubenswrapper[5120]: I1211 16:10:15.982218 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" event={"ID":"8b8ad823-fd58-4438-8d21-7e8cbe20252e","Type":"ContainerStarted","Data":"6bafa1af26d52c3a09bd86d4af56a2f8a1242929a792adcad848dedb3f7be8e9"} Dec 11 16:10:20 crc kubenswrapper[5120]: I1211 16:10:20.012018 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" event={"ID":"8b8ad823-fd58-4438-8d21-7e8cbe20252e","Type":"ContainerStarted","Data":"37b178e6d28e7ab0a7bc070cc3aeb620235157f199cecca31b5f511d47b85d2b"} Dec 11 16:10:20 crc kubenswrapper[5120]: I1211 16:10:20.012594 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:20 crc kubenswrapper[5120]: I1211 16:10:20.012607 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:20 crc kubenswrapper[5120]: I1211 16:10:20.048604 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" podStartSLOduration=9.048588706 podStartE2EDuration="9.048588706s" podCreationTimestamp="2025-12-11 16:10:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 
16:10:20.046960153 +0000 UTC m=+569.301263494" watchObservedRunningTime="2025-12-11 16:10:20.048588706 +0000 UTC m=+569.302892037" Dec 11 16:10:20 crc kubenswrapper[5120]: I1211 16:10:20.053094 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:21 crc kubenswrapper[5120]: I1211 16:10:21.017032 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:21 crc kubenswrapper[5120]: I1211 16:10:21.096031 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:28 crc kubenswrapper[5120]: I1211 16:10:28.717686 5120 patch_prober.go:28] interesting pod/machine-config-daemon-fpg9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 16:10:28 crc kubenswrapper[5120]: I1211 16:10:28.718238 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" podUID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 16:10:51 crc kubenswrapper[5120]: I1211 16:10:51.212515 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qzwn6_7143452f-c193-4dbf-872c-a3ae9245f158/kube-multus/0.log" Dec 11 16:10:51 crc kubenswrapper[5120]: I1211 16:10:51.214624 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qzwn6_7143452f-c193-4dbf-872c-a3ae9245f158/kube-multus/0.log" Dec 11 16:10:51 crc kubenswrapper[5120]: I1211 16:10:51.219617 5120 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 11 16:10:51 crc kubenswrapper[5120]: I1211 16:10:51.219871 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 11 16:10:53 crc kubenswrapper[5120]: I1211 16:10:53.055474 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sgcxk" Dec 11 16:10:58 crc kubenswrapper[5120]: I1211 16:10:58.717630 5120 patch_prober.go:28] interesting pod/machine-config-daemon-fpg9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 16:10:58 crc kubenswrapper[5120]: I1211 16:10:58.719033 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" podUID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 16:10:59 crc kubenswrapper[5120]: I1211 16:10:59.197295 5120 ???:1] "http: TLS handshake error from 192.168.126.11:47586: no serving certificate available for the kubelet" Dec 11 16:11:19 crc kubenswrapper[5120]: I1211 16:11:19.369945 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ljt5"] Dec 11 16:11:19 crc kubenswrapper[5120]: I1211 16:11:19.371569 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9ljt5" podUID="1dbe1b0d-3e35-48b8-93e3-e1aa3665a781" containerName="registry-server" 
containerID="cri-o://308b2fad4b9bd9f3e7c0cfaf4a218e9aa751091ec7dbe061ef36d37cfbfa1f53" gracePeriod=30 Dec 11 16:11:19 crc kubenswrapper[5120]: I1211 16:11:19.778322 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9ljt5" Dec 11 16:11:19 crc kubenswrapper[5120]: I1211 16:11:19.963415 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dbe1b0d-3e35-48b8-93e3-e1aa3665a781-utilities\") pod \"1dbe1b0d-3e35-48b8-93e3-e1aa3665a781\" (UID: \"1dbe1b0d-3e35-48b8-93e3-e1aa3665a781\") " Dec 11 16:11:19 crc kubenswrapper[5120]: I1211 16:11:19.963528 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dbe1b0d-3e35-48b8-93e3-e1aa3665a781-catalog-content\") pod \"1dbe1b0d-3e35-48b8-93e3-e1aa3665a781\" (UID: \"1dbe1b0d-3e35-48b8-93e3-e1aa3665a781\") " Dec 11 16:11:19 crc kubenswrapper[5120]: I1211 16:11:19.963644 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9p8cz\" (UniqueName: \"kubernetes.io/projected/1dbe1b0d-3e35-48b8-93e3-e1aa3665a781-kube-api-access-9p8cz\") pod \"1dbe1b0d-3e35-48b8-93e3-e1aa3665a781\" (UID: \"1dbe1b0d-3e35-48b8-93e3-e1aa3665a781\") " Dec 11 16:11:19 crc kubenswrapper[5120]: I1211 16:11:19.966279 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dbe1b0d-3e35-48b8-93e3-e1aa3665a781-utilities" (OuterVolumeSpecName: "utilities") pod "1dbe1b0d-3e35-48b8-93e3-e1aa3665a781" (UID: "1dbe1b0d-3e35-48b8-93e3-e1aa3665a781"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:11:19 crc kubenswrapper[5120]: I1211 16:11:19.975388 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dbe1b0d-3e35-48b8-93e3-e1aa3665a781-kube-api-access-9p8cz" (OuterVolumeSpecName: "kube-api-access-9p8cz") pod "1dbe1b0d-3e35-48b8-93e3-e1aa3665a781" (UID: "1dbe1b0d-3e35-48b8-93e3-e1aa3665a781"). InnerVolumeSpecName "kube-api-access-9p8cz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:11:19 crc kubenswrapper[5120]: I1211 16:11:19.989600 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dbe1b0d-3e35-48b8-93e3-e1aa3665a781-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1dbe1b0d-3e35-48b8-93e3-e1aa3665a781" (UID: "1dbe1b0d-3e35-48b8-93e3-e1aa3665a781"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.065893 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dbe1b0d-3e35-48b8-93e3-e1aa3665a781-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.065985 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dbe1b0d-3e35-48b8-93e3-e1aa3665a781-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.066018 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9p8cz\" (UniqueName: \"kubernetes.io/projected/1dbe1b0d-3e35-48b8-93e3-e1aa3665a781-kube-api-access-9p8cz\") on node \"crc\" DevicePath \"\"" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.383175 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-zqqdz"] Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 
16:11:20.384273 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1dbe1b0d-3e35-48b8-93e3-e1aa3665a781" containerName="extract-content" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.384299 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dbe1b0d-3e35-48b8-93e3-e1aa3665a781" containerName="extract-content" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.384331 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1dbe1b0d-3e35-48b8-93e3-e1aa3665a781" containerName="extract-utilities" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.384345 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dbe1b0d-3e35-48b8-93e3-e1aa3665a781" containerName="extract-utilities" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.384364 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1dbe1b0d-3e35-48b8-93e3-e1aa3665a781" containerName="registry-server" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.384378 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dbe1b0d-3e35-48b8-93e3-e1aa3665a781" containerName="registry-server" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.384597 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="1dbe1b0d-3e35-48b8-93e3-e1aa3665a781" containerName="registry-server" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.397020 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-zqqdz" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.398093 5120 generic.go:358] "Generic (PLEG): container finished" podID="1dbe1b0d-3e35-48b8-93e3-e1aa3665a781" containerID="308b2fad4b9bd9f3e7c0cfaf4a218e9aa751091ec7dbe061ef36d37cfbfa1f53" exitCode=0 Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.398290 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ljt5" event={"ID":"1dbe1b0d-3e35-48b8-93e3-e1aa3665a781","Type":"ContainerDied","Data":"308b2fad4b9bd9f3e7c0cfaf4a218e9aa751091ec7dbe061ef36d37cfbfa1f53"} Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.398340 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9ljt5" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.398376 5120 scope.go:117] "RemoveContainer" containerID="308b2fad4b9bd9f3e7c0cfaf4a218e9aa751091ec7dbe061ef36d37cfbfa1f53" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.398356 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ljt5" event={"ID":"1dbe1b0d-3e35-48b8-93e3-e1aa3665a781","Type":"ContainerDied","Data":"e6dfbd92e60611900491f281107243ab4958650d58a49cbdfff5b6b4b0563613"} Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.409969 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-zqqdz"] Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.432525 5120 scope.go:117] "RemoveContainer" containerID="cc4cd19e06d8ae0b6a65b7bfce858692e2edad1c7d7eb8a7b06b2151b5ba23f3" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.470517 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ljt5"] Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.474528 5120 scope.go:117] "RemoveContainer" 
containerID="15b0e13a16027d59fd6174442d8ea7492835a34ea1ccf7efe9ad9db02606de95" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.477884 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ljt5"] Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.499969 5120 scope.go:117] "RemoveContainer" containerID="308b2fad4b9bd9f3e7c0cfaf4a218e9aa751091ec7dbe061ef36d37cfbfa1f53" Dec 11 16:11:20 crc kubenswrapper[5120]: E1211 16:11:20.501518 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"308b2fad4b9bd9f3e7c0cfaf4a218e9aa751091ec7dbe061ef36d37cfbfa1f53\": container with ID starting with 308b2fad4b9bd9f3e7c0cfaf4a218e9aa751091ec7dbe061ef36d37cfbfa1f53 not found: ID does not exist" containerID="308b2fad4b9bd9f3e7c0cfaf4a218e9aa751091ec7dbe061ef36d37cfbfa1f53" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.501573 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"308b2fad4b9bd9f3e7c0cfaf4a218e9aa751091ec7dbe061ef36d37cfbfa1f53"} err="failed to get container status \"308b2fad4b9bd9f3e7c0cfaf4a218e9aa751091ec7dbe061ef36d37cfbfa1f53\": rpc error: code = NotFound desc = could not find container \"308b2fad4b9bd9f3e7c0cfaf4a218e9aa751091ec7dbe061ef36d37cfbfa1f53\": container with ID starting with 308b2fad4b9bd9f3e7c0cfaf4a218e9aa751091ec7dbe061ef36d37cfbfa1f53 not found: ID does not exist" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.501608 5120 scope.go:117] "RemoveContainer" containerID="cc4cd19e06d8ae0b6a65b7bfce858692e2edad1c7d7eb8a7b06b2151b5ba23f3" Dec 11 16:11:20 crc kubenswrapper[5120]: E1211 16:11:20.502895 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc4cd19e06d8ae0b6a65b7bfce858692e2edad1c7d7eb8a7b06b2151b5ba23f3\": container with ID starting with 
cc4cd19e06d8ae0b6a65b7bfce858692e2edad1c7d7eb8a7b06b2151b5ba23f3 not found: ID does not exist" containerID="cc4cd19e06d8ae0b6a65b7bfce858692e2edad1c7d7eb8a7b06b2151b5ba23f3"
Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.502987 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc4cd19e06d8ae0b6a65b7bfce858692e2edad1c7d7eb8a7b06b2151b5ba23f3"} err="failed to get container status \"cc4cd19e06d8ae0b6a65b7bfce858692e2edad1c7d7eb8a7b06b2151b5ba23f3\": rpc error: code = NotFound desc = could not find container \"cc4cd19e06d8ae0b6a65b7bfce858692e2edad1c7d7eb8a7b06b2151b5ba23f3\": container with ID starting with cc4cd19e06d8ae0b6a65b7bfce858692e2edad1c7d7eb8a7b06b2151b5ba23f3 not found: ID does not exist"
Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.503058 5120 scope.go:117] "RemoveContainer" containerID="15b0e13a16027d59fd6174442d8ea7492835a34ea1ccf7efe9ad9db02606de95"
Dec 11 16:11:20 crc kubenswrapper[5120]: E1211 16:11:20.504033 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15b0e13a16027d59fd6174442d8ea7492835a34ea1ccf7efe9ad9db02606de95\": container with ID starting with 15b0e13a16027d59fd6174442d8ea7492835a34ea1ccf7efe9ad9db02606de95 not found: ID does not exist" containerID="15b0e13a16027d59fd6174442d8ea7492835a34ea1ccf7efe9ad9db02606de95"
Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.504120 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15b0e13a16027d59fd6174442d8ea7492835a34ea1ccf7efe9ad9db02606de95"} err="failed to get container status \"15b0e13a16027d59fd6174442d8ea7492835a34ea1ccf7efe9ad9db02606de95\": rpc error: code = NotFound desc = could not find container \"15b0e13a16027d59fd6174442d8ea7492835a34ea1ccf7efe9ad9db02606de95\": container with ID starting with 15b0e13a16027d59fd6174442d8ea7492835a34ea1ccf7efe9ad9db02606de95 not found: ID does not
exist" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.577645 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1da5c17d-a65c-47ad-9bec-27dd4c4174d4-registry-tls\") pod \"image-registry-5d9d95bf5b-zqqdz\" (UID: \"1da5c17d-a65c-47ad-9bec-27dd4c4174d4\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-zqqdz" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.577875 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1da5c17d-a65c-47ad-9bec-27dd4c4174d4-bound-sa-token\") pod \"image-registry-5d9d95bf5b-zqqdz\" (UID: \"1da5c17d-a65c-47ad-9bec-27dd4c4174d4\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-zqqdz" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.577964 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1da5c17d-a65c-47ad-9bec-27dd4c4174d4-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-zqqdz\" (UID: \"1da5c17d-a65c-47ad-9bec-27dd4c4174d4\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-zqqdz" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.578071 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1da5c17d-a65c-47ad-9bec-27dd4c4174d4-registry-certificates\") pod \"image-registry-5d9d95bf5b-zqqdz\" (UID: \"1da5c17d-a65c-47ad-9bec-27dd4c4174d4\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-zqqdz" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.578186 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-zqqdz\" (UID: \"1da5c17d-a65c-47ad-9bec-27dd4c4174d4\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-zqqdz" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.578271 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1da5c17d-a65c-47ad-9bec-27dd4c4174d4-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-zqqdz\" (UID: \"1da5c17d-a65c-47ad-9bec-27dd4c4174d4\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-zqqdz" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.578347 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1da5c17d-a65c-47ad-9bec-27dd4c4174d4-trusted-ca\") pod \"image-registry-5d9d95bf5b-zqqdz\" (UID: \"1da5c17d-a65c-47ad-9bec-27dd4c4174d4\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-zqqdz" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.578429 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7pw6\" (UniqueName: \"kubernetes.io/projected/1da5c17d-a65c-47ad-9bec-27dd4c4174d4-kube-api-access-c7pw6\") pod \"image-registry-5d9d95bf5b-zqqdz\" (UID: \"1da5c17d-a65c-47ad-9bec-27dd4c4174d4\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-zqqdz" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.596266 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-zqqdz\" (UID: \"1da5c17d-a65c-47ad-9bec-27dd4c4174d4\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-zqqdz" Dec 
11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.679680 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1da5c17d-a65c-47ad-9bec-27dd4c4174d4-registry-certificates\") pod \"image-registry-5d9d95bf5b-zqqdz\" (UID: \"1da5c17d-a65c-47ad-9bec-27dd4c4174d4\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-zqqdz"
Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.679756 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1da5c17d-a65c-47ad-9bec-27dd4c4174d4-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-zqqdz\" (UID: \"1da5c17d-a65c-47ad-9bec-27dd4c4174d4\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-zqqdz"
Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.679788 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1da5c17d-a65c-47ad-9bec-27dd4c4174d4-trusted-ca\") pod \"image-registry-5d9d95bf5b-zqqdz\" (UID: \"1da5c17d-a65c-47ad-9bec-27dd4c4174d4\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-zqqdz"
Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.679824 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c7pw6\" (UniqueName: \"kubernetes.io/projected/1da5c17d-a65c-47ad-9bec-27dd4c4174d4-kube-api-access-c7pw6\") pod \"image-registry-5d9d95bf5b-zqqdz\" (UID: \"1da5c17d-a65c-47ad-9bec-27dd4c4174d4\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-zqqdz"
Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.679897 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1da5c17d-a65c-47ad-9bec-27dd4c4174d4-registry-tls\") pod \"image-registry-5d9d95bf5b-zqqdz\" (UID:
\"1da5c17d-a65c-47ad-9bec-27dd4c4174d4\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-zqqdz" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.679920 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1da5c17d-a65c-47ad-9bec-27dd4c4174d4-bound-sa-token\") pod \"image-registry-5d9d95bf5b-zqqdz\" (UID: \"1da5c17d-a65c-47ad-9bec-27dd4c4174d4\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-zqqdz" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.679948 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1da5c17d-a65c-47ad-9bec-27dd4c4174d4-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-zqqdz\" (UID: \"1da5c17d-a65c-47ad-9bec-27dd4c4174d4\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-zqqdz" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.680892 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1da5c17d-a65c-47ad-9bec-27dd4c4174d4-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-zqqdz\" (UID: \"1da5c17d-a65c-47ad-9bec-27dd4c4174d4\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-zqqdz" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.683348 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1da5c17d-a65c-47ad-9bec-27dd4c4174d4-registry-certificates\") pod \"image-registry-5d9d95bf5b-zqqdz\" (UID: \"1da5c17d-a65c-47ad-9bec-27dd4c4174d4\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-zqqdz" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.683545 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/1da5c17d-a65c-47ad-9bec-27dd4c4174d4-trusted-ca\") pod \"image-registry-5d9d95bf5b-zqqdz\" (UID: \"1da5c17d-a65c-47ad-9bec-27dd4c4174d4\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-zqqdz" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.686353 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1da5c17d-a65c-47ad-9bec-27dd4c4174d4-registry-tls\") pod \"image-registry-5d9d95bf5b-zqqdz\" (UID: \"1da5c17d-a65c-47ad-9bec-27dd4c4174d4\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-zqqdz" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.687495 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1da5c17d-a65c-47ad-9bec-27dd4c4174d4-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-zqqdz\" (UID: \"1da5c17d-a65c-47ad-9bec-27dd4c4174d4\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-zqqdz" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.709342 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1da5c17d-a65c-47ad-9bec-27dd4c4174d4-bound-sa-token\") pod \"image-registry-5d9d95bf5b-zqqdz\" (UID: \"1da5c17d-a65c-47ad-9bec-27dd4c4174d4\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-zqqdz" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.713336 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7pw6\" (UniqueName: \"kubernetes.io/projected/1da5c17d-a65c-47ad-9bec-27dd4c4174d4-kube-api-access-c7pw6\") pod \"image-registry-5d9d95bf5b-zqqdz\" (UID: \"1da5c17d-a65c-47ad-9bec-27dd4c4174d4\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-zqqdz" Dec 11 16:11:20 crc kubenswrapper[5120]: I1211 16:11:20.722559 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-zqqdz" Dec 11 16:11:21 crc kubenswrapper[5120]: I1211 16:11:21.030580 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dbe1b0d-3e35-48b8-93e3-e1aa3665a781" path="/var/lib/kubelet/pods/1dbe1b0d-3e35-48b8-93e3-e1aa3665a781/volumes" Dec 11 16:11:21 crc kubenswrapper[5120]: I1211 16:11:21.241522 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-zqqdz"] Dec 11 16:11:21 crc kubenswrapper[5120]: I1211 16:11:21.261937 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 16:11:21 crc kubenswrapper[5120]: I1211 16:11:21.407447 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-zqqdz" event={"ID":"1da5c17d-a65c-47ad-9bec-27dd4c4174d4","Type":"ContainerStarted","Data":"f09beba33044a36f6a397279644210db4d3569f5e1e079d2286f5b7ca9ba67d6"} Dec 11 16:11:21 crc kubenswrapper[5120]: I1211 16:11:21.407494 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-zqqdz" event={"ID":"1da5c17d-a65c-47ad-9bec-27dd4c4174d4","Type":"ContainerStarted","Data":"a5f43a6a3819a053315d1e582c19d113abb4ed7043e9d287243a49fa4acb1bd7"} Dec 11 16:11:21 crc kubenswrapper[5120]: I1211 16:11:21.408955 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-zqqdz" Dec 11 16:11:21 crc kubenswrapper[5120]: I1211 16:11:21.426583 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-zqqdz" podStartSLOduration=1.426563617 podStartE2EDuration="1.426563617s" podCreationTimestamp="2025-12-11 16:11:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-11 16:11:21.425498359 +0000 UTC m=+630.679801700" watchObservedRunningTime="2025-12-11 16:11:21.426563617 +0000 UTC m=+630.680866948" Dec 11 16:11:22 crc kubenswrapper[5120]: I1211 16:11:22.986905 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fjjmz"] Dec 11 16:11:23 crc kubenswrapper[5120]: I1211 16:11:23.000132 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fjjmz" Dec 11 16:11:23 crc kubenswrapper[5120]: I1211 16:11:23.000222 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fjjmz"] Dec 11 16:11:23 crc kubenswrapper[5120]: I1211 16:11:23.007204 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 11 16:11:23 crc kubenswrapper[5120]: I1211 16:11:23.114782 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2687\" (UniqueName: \"kubernetes.io/projected/72420a87-2686-4ff0-85ae-46903ab88c8b-kube-api-access-w2687\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fjjmz\" (UID: \"72420a87-2686-4ff0-85ae-46903ab88c8b\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fjjmz" Dec 11 16:11:23 crc kubenswrapper[5120]: I1211 16:11:23.114862 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/72420a87-2686-4ff0-85ae-46903ab88c8b-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fjjmz\" (UID: \"72420a87-2686-4ff0-85ae-46903ab88c8b\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fjjmz" Dec 11 
16:11:23 crc kubenswrapper[5120]: I1211 16:11:23.114893 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/72420a87-2686-4ff0-85ae-46903ab88c8b-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fjjmz\" (UID: \"72420a87-2686-4ff0-85ae-46903ab88c8b\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fjjmz"
Dec 11 16:11:23 crc kubenswrapper[5120]: I1211 16:11:23.216625 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w2687\" (UniqueName: \"kubernetes.io/projected/72420a87-2686-4ff0-85ae-46903ab88c8b-kube-api-access-w2687\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fjjmz\" (UID: \"72420a87-2686-4ff0-85ae-46903ab88c8b\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fjjmz"
Dec 11 16:11:23 crc kubenswrapper[5120]: I1211 16:11:23.216681 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/72420a87-2686-4ff0-85ae-46903ab88c8b-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fjjmz\" (UID: \"72420a87-2686-4ff0-85ae-46903ab88c8b\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fjjmz"
Dec 11 16:11:23 crc kubenswrapper[5120]: I1211 16:11:23.216714 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/72420a87-2686-4ff0-85ae-46903ab88c8b-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fjjmz\" (UID: \"72420a87-2686-4ff0-85ae-46903ab88c8b\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fjjmz"
Dec 11 16:11:23 crc kubenswrapper[5120]: I1211 16:11:23.217403 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume
\"bundle\" (UniqueName: \"kubernetes.io/empty-dir/72420a87-2686-4ff0-85ae-46903ab88c8b-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fjjmz\" (UID: \"72420a87-2686-4ff0-85ae-46903ab88c8b\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fjjmz" Dec 11 16:11:23 crc kubenswrapper[5120]: I1211 16:11:23.217555 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/72420a87-2686-4ff0-85ae-46903ab88c8b-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fjjmz\" (UID: \"72420a87-2686-4ff0-85ae-46903ab88c8b\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fjjmz" Dec 11 16:11:23 crc kubenswrapper[5120]: I1211 16:11:23.243016 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2687\" (UniqueName: \"kubernetes.io/projected/72420a87-2686-4ff0-85ae-46903ab88c8b-kube-api-access-w2687\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fjjmz\" (UID: \"72420a87-2686-4ff0-85ae-46903ab88c8b\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fjjmz" Dec 11 16:11:23 crc kubenswrapper[5120]: I1211 16:11:23.321060 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fjjmz" Dec 11 16:11:23 crc kubenswrapper[5120]: I1211 16:11:23.798327 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fjjmz"] Dec 11 16:11:23 crc kubenswrapper[5120]: W1211 16:11:23.805515 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72420a87_2686_4ff0_85ae_46903ab88c8b.slice/crio-d57e0e48a8031f3d65f2fdc6d3fc46f9ecb23d0bb9090ef180159d841a2c5b4b WatchSource:0}: Error finding container d57e0e48a8031f3d65f2fdc6d3fc46f9ecb23d0bb9090ef180159d841a2c5b4b: Status 404 returned error can't find the container with id d57e0e48a8031f3d65f2fdc6d3fc46f9ecb23d0bb9090ef180159d841a2c5b4b Dec 11 16:11:24 crc kubenswrapper[5120]: I1211 16:11:24.431139 5120 generic.go:358] "Generic (PLEG): container finished" podID="72420a87-2686-4ff0-85ae-46903ab88c8b" containerID="e291cf5c68c5541f8b0d99cd1053f6e71fcf1d54058c051fc92bc3a77b6f0a82" exitCode=0 Dec 11 16:11:24 crc kubenswrapper[5120]: I1211 16:11:24.431231 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fjjmz" event={"ID":"72420a87-2686-4ff0-85ae-46903ab88c8b","Type":"ContainerDied","Data":"e291cf5c68c5541f8b0d99cd1053f6e71fcf1d54058c051fc92bc3a77b6f0a82"} Dec 11 16:11:24 crc kubenswrapper[5120]: I1211 16:11:24.431354 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fjjmz" event={"ID":"72420a87-2686-4ff0-85ae-46903ab88c8b","Type":"ContainerStarted","Data":"d57e0e48a8031f3d65f2fdc6d3fc46f9ecb23d0bb9090ef180159d841a2c5b4b"} Dec 11 16:11:26 crc kubenswrapper[5120]: I1211 16:11:26.444824 5120 generic.go:358] "Generic (PLEG): container finished" 
podID="72420a87-2686-4ff0-85ae-46903ab88c8b" containerID="b0f71fa65888c4aa897c99284e3c8bdc07cc1b126bd70b144151dff22ef3612d" exitCode=0 Dec 11 16:11:26 crc kubenswrapper[5120]: I1211 16:11:26.444896 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fjjmz" event={"ID":"72420a87-2686-4ff0-85ae-46903ab88c8b","Type":"ContainerDied","Data":"b0f71fa65888c4aa897c99284e3c8bdc07cc1b126bd70b144151dff22ef3612d"} Dec 11 16:11:27 crc kubenswrapper[5120]: I1211 16:11:27.456313 5120 generic.go:358] "Generic (PLEG): container finished" podID="72420a87-2686-4ff0-85ae-46903ab88c8b" containerID="ad8496bfc6e9979ceffd4152cd9f1def5b94b6db0c94716f68078927b0a16aba" exitCode=0 Dec 11 16:11:27 crc kubenswrapper[5120]: I1211 16:11:27.456471 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fjjmz" event={"ID":"72420a87-2686-4ff0-85ae-46903ab88c8b","Type":"ContainerDied","Data":"ad8496bfc6e9979ceffd4152cd9f1def5b94b6db0c94716f68078927b0a16aba"} Dec 11 16:11:28 crc kubenswrapper[5120]: I1211 16:11:28.717813 5120 patch_prober.go:28] interesting pod/machine-config-daemon-fpg9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 16:11:28 crc kubenswrapper[5120]: I1211 16:11:28.718305 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" podUID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 16:11:28 crc kubenswrapper[5120]: I1211 16:11:28.718373 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" Dec 11 16:11:28 crc kubenswrapper[5120]: I1211 16:11:28.719088 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a09fb695df5d1b3ee680128c4cd59d89388d5ef467e74023daef155b556f17c3"} pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 16:11:28 crc kubenswrapper[5120]: I1211 16:11:28.719205 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" podUID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerName="machine-config-daemon" containerID="cri-o://a09fb695df5d1b3ee680128c4cd59d89388d5ef467e74023daef155b556f17c3" gracePeriod=600 Dec 11 16:11:28 crc kubenswrapper[5120]: I1211 16:11:28.799637 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fjjmz" Dec 11 16:11:28 crc kubenswrapper[5120]: I1211 16:11:28.911398 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/72420a87-2686-4ff0-85ae-46903ab88c8b-util\") pod \"72420a87-2686-4ff0-85ae-46903ab88c8b\" (UID: \"72420a87-2686-4ff0-85ae-46903ab88c8b\") " Dec 11 16:11:28 crc kubenswrapper[5120]: I1211 16:11:28.911460 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2687\" (UniqueName: \"kubernetes.io/projected/72420a87-2686-4ff0-85ae-46903ab88c8b-kube-api-access-w2687\") pod \"72420a87-2686-4ff0-85ae-46903ab88c8b\" (UID: \"72420a87-2686-4ff0-85ae-46903ab88c8b\") " Dec 11 16:11:28 crc kubenswrapper[5120]: I1211 16:11:28.911566 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/72420a87-2686-4ff0-85ae-46903ab88c8b-bundle\") pod \"72420a87-2686-4ff0-85ae-46903ab88c8b\" (UID: \"72420a87-2686-4ff0-85ae-46903ab88c8b\") " Dec 11 16:11:28 crc kubenswrapper[5120]: I1211 16:11:28.914680 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72420a87-2686-4ff0-85ae-46903ab88c8b-bundle" (OuterVolumeSpecName: "bundle") pod "72420a87-2686-4ff0-85ae-46903ab88c8b" (UID: "72420a87-2686-4ff0-85ae-46903ab88c8b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:11:28 crc kubenswrapper[5120]: I1211 16:11:28.922948 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72420a87-2686-4ff0-85ae-46903ab88c8b-kube-api-access-w2687" (OuterVolumeSpecName: "kube-api-access-w2687") pod "72420a87-2686-4ff0-85ae-46903ab88c8b" (UID: "72420a87-2686-4ff0-85ae-46903ab88c8b"). InnerVolumeSpecName "kube-api-access-w2687". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:11:28 crc kubenswrapper[5120]: I1211 16:11:28.925875 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72420a87-2686-4ff0-85ae-46903ab88c8b-util" (OuterVolumeSpecName: "util") pod "72420a87-2686-4ff0-85ae-46903ab88c8b" (UID: "72420a87-2686-4ff0-85ae-46903ab88c8b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:11:29 crc kubenswrapper[5120]: I1211 16:11:29.013383 5120 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/72420a87-2686-4ff0-85ae-46903ab88c8b-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 16:11:29 crc kubenswrapper[5120]: I1211 16:11:29.013418 5120 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/72420a87-2686-4ff0-85ae-46903ab88c8b-util\") on node \"crc\" DevicePath \"\"" Dec 11 16:11:29 crc kubenswrapper[5120]: I1211 16:11:29.013431 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w2687\" (UniqueName: \"kubernetes.io/projected/72420a87-2686-4ff0-85ae-46903ab88c8b-kube-api-access-w2687\") on node \"crc\" DevicePath \"\"" Dec 11 16:11:29 crc kubenswrapper[5120]: I1211 16:11:29.472622 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fjjmz" event={"ID":"72420a87-2686-4ff0-85ae-46903ab88c8b","Type":"ContainerDied","Data":"d57e0e48a8031f3d65f2fdc6d3fc46f9ecb23d0bb9090ef180159d841a2c5b4b"} Dec 11 16:11:29 crc kubenswrapper[5120]: I1211 16:11:29.472913 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d57e0e48a8031f3d65f2fdc6d3fc46f9ecb23d0bb9090ef180159d841a2c5b4b" Dec 11 16:11:29 crc kubenswrapper[5120]: I1211 16:11:29.472716 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fjjmz" Dec 11 16:11:29 crc kubenswrapper[5120]: I1211 16:11:29.476303 5120 generic.go:358] "Generic (PLEG): container finished" podID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerID="a09fb695df5d1b3ee680128c4cd59d89388d5ef467e74023daef155b556f17c3" exitCode=0 Dec 11 16:11:29 crc kubenswrapper[5120]: I1211 16:11:29.476432 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" event={"ID":"e868a29f-b837-4513-ad30-f5b6c4354a09","Type":"ContainerDied","Data":"a09fb695df5d1b3ee680128c4cd59d89388d5ef467e74023daef155b556f17c3"} Dec 11 16:11:29 crc kubenswrapper[5120]: I1211 16:11:29.476515 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" event={"ID":"e868a29f-b837-4513-ad30-f5b6c4354a09","Type":"ContainerStarted","Data":"c1c4951fd13c7ebf545cc70952dba6bad301362a8233620d9c4df1820bb44170"} Dec 11 16:11:29 crc kubenswrapper[5120]: I1211 16:11:29.476546 5120 scope.go:117] "RemoveContainer" containerID="37cd81fdaf9b948884ac7f04fbf6a66e92e823688abb89ee1140d9a2b9d90eb4" Dec 11 16:11:30 crc kubenswrapper[5120]: I1211 16:11:30.562702 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebh5hz"] Dec 11 16:11:30 crc kubenswrapper[5120]: I1211 16:11:30.563621 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="72420a87-2686-4ff0-85ae-46903ab88c8b" containerName="extract" Dec 11 16:11:30 crc kubenswrapper[5120]: I1211 16:11:30.563639 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="72420a87-2686-4ff0-85ae-46903ab88c8b" containerName="extract" Dec 11 16:11:30 crc kubenswrapper[5120]: I1211 16:11:30.563668 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="72420a87-2686-4ff0-85ae-46903ab88c8b" containerName="util" Dec 11 16:11:30 crc kubenswrapper[5120]: I1211 16:11:30.563677 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="72420a87-2686-4ff0-85ae-46903ab88c8b" containerName="util" Dec 11 16:11:30 crc kubenswrapper[5120]: I1211 16:11:30.563693 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="72420a87-2686-4ff0-85ae-46903ab88c8b" containerName="pull" Dec 11 16:11:30 crc kubenswrapper[5120]: I1211 16:11:30.563700 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="72420a87-2686-4ff0-85ae-46903ab88c8b" containerName="pull" Dec 11 16:11:30 crc kubenswrapper[5120]: I1211 16:11:30.563808 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="72420a87-2686-4ff0-85ae-46903ab88c8b" containerName="extract" Dec 11 16:11:30 crc kubenswrapper[5120]: I1211 16:11:30.568222 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebh5hz" Dec 11 16:11:30 crc kubenswrapper[5120]: I1211 16:11:30.572977 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 11 16:11:30 crc kubenswrapper[5120]: I1211 16:11:30.580276 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebh5hz"] Dec 11 16:11:30 crc kubenswrapper[5120]: I1211 16:11:30.641137 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10ebb9de-0ade-4d0b-9dbe-45c91196e002-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebh5hz\" (UID: \"10ebb9de-0ade-4d0b-9dbe-45c91196e002\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebh5hz" Dec 11 16:11:30 crc kubenswrapper[5120]: I1211 
16:11:30.641665 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df9xh\" (UniqueName: \"kubernetes.io/projected/10ebb9de-0ade-4d0b-9dbe-45c91196e002-kube-api-access-df9xh\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebh5hz\" (UID: \"10ebb9de-0ade-4d0b-9dbe-45c91196e002\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebh5hz" Dec 11 16:11:30 crc kubenswrapper[5120]: I1211 16:11:30.642037 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10ebb9de-0ade-4d0b-9dbe-45c91196e002-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebh5hz\" (UID: \"10ebb9de-0ade-4d0b-9dbe-45c91196e002\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebh5hz" Dec 11 16:11:30 crc kubenswrapper[5120]: I1211 16:11:30.744096 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10ebb9de-0ade-4d0b-9dbe-45c91196e002-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebh5hz\" (UID: \"10ebb9de-0ade-4d0b-9dbe-45c91196e002\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebh5hz" Dec 11 16:11:30 crc kubenswrapper[5120]: I1211 16:11:30.744237 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-df9xh\" (UniqueName: \"kubernetes.io/projected/10ebb9de-0ade-4d0b-9dbe-45c91196e002-kube-api-access-df9xh\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebh5hz\" (UID: \"10ebb9de-0ade-4d0b-9dbe-45c91196e002\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebh5hz" Dec 11 16:11:30 crc kubenswrapper[5120]: I1211 16:11:30.744369 5120 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10ebb9de-0ade-4d0b-9dbe-45c91196e002-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebh5hz\" (UID: \"10ebb9de-0ade-4d0b-9dbe-45c91196e002\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebh5hz" Dec 11 16:11:30 crc kubenswrapper[5120]: I1211 16:11:30.745289 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10ebb9de-0ade-4d0b-9dbe-45c91196e002-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebh5hz\" (UID: \"10ebb9de-0ade-4d0b-9dbe-45c91196e002\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebh5hz" Dec 11 16:11:30 crc kubenswrapper[5120]: I1211 16:11:30.745330 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10ebb9de-0ade-4d0b-9dbe-45c91196e002-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebh5hz\" (UID: \"10ebb9de-0ade-4d0b-9dbe-45c91196e002\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebh5hz" Dec 11 16:11:30 crc kubenswrapper[5120]: I1211 16:11:30.776944 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-df9xh\" (UniqueName: \"kubernetes.io/projected/10ebb9de-0ade-4d0b-9dbe-45c91196e002-kube-api-access-df9xh\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebh5hz\" (UID: \"10ebb9de-0ade-4d0b-9dbe-45c91196e002\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebh5hz" Dec 11 16:11:30 crc kubenswrapper[5120]: I1211 16:11:30.890887 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebh5hz" Dec 11 16:11:31 crc kubenswrapper[5120]: I1211 16:11:31.187580 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebh5hz"] Dec 11 16:11:31 crc kubenswrapper[5120]: W1211 16:11:31.195422 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10ebb9de_0ade_4d0b_9dbe_45c91196e002.slice/crio-a74a069ff584d2254f01fa0aac820e108edaef67d68d2c7cfd803b7ab9ae37c9 WatchSource:0}: Error finding container a74a069ff584d2254f01fa0aac820e108edaef67d68d2c7cfd803b7ab9ae37c9: Status 404 returned error can't find the container with id a74a069ff584d2254f01fa0aac820e108edaef67d68d2c7cfd803b7ab9ae37c9 Dec 11 16:11:31 crc kubenswrapper[5120]: I1211 16:11:31.500324 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebh5hz" event={"ID":"10ebb9de-0ade-4d0b-9dbe-45c91196e002","Type":"ContainerStarted","Data":"800589fb202c692a474599a279b243dc0b8dd84dfddc70159872594d11af9c15"} Dec 11 16:11:31 crc kubenswrapper[5120]: I1211 16:11:31.500376 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebh5hz" event={"ID":"10ebb9de-0ade-4d0b-9dbe-45c91196e002","Type":"ContainerStarted","Data":"a74a069ff584d2254f01fa0aac820e108edaef67d68d2c7cfd803b7ab9ae37c9"} Dec 11 16:11:32 crc kubenswrapper[5120]: I1211 16:11:32.507298 5120 generic.go:358] "Generic (PLEG): container finished" podID="10ebb9de-0ade-4d0b-9dbe-45c91196e002" containerID="800589fb202c692a474599a279b243dc0b8dd84dfddc70159872594d11af9c15" exitCode=0 Dec 11 16:11:32 crc kubenswrapper[5120]: I1211 16:11:32.507535 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebh5hz" event={"ID":"10ebb9de-0ade-4d0b-9dbe-45c91196e002","Type":"ContainerDied","Data":"800589fb202c692a474599a279b243dc0b8dd84dfddc70159872594d11af9c15"} Dec 11 16:11:34 crc kubenswrapper[5120]: I1211 16:11:34.520914 5120 generic.go:358] "Generic (PLEG): container finished" podID="10ebb9de-0ade-4d0b-9dbe-45c91196e002" containerID="e4446468dca1c3bc6f612eca3a2e6c7412a92d3287855906e8cc63f38104c2e1" exitCode=0 Dec 11 16:11:34 crc kubenswrapper[5120]: I1211 16:11:34.520951 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebh5hz" event={"ID":"10ebb9de-0ade-4d0b-9dbe-45c91196e002","Type":"ContainerDied","Data":"e4446468dca1c3bc6f612eca3a2e6c7412a92d3287855906e8cc63f38104c2e1"} Dec 11 16:11:35 crc kubenswrapper[5120]: I1211 16:11:35.527514 5120 generic.go:358] "Generic (PLEG): container finished" podID="10ebb9de-0ade-4d0b-9dbe-45c91196e002" containerID="2b30112585810c89edde5896c36e49d23753afaeac79c7af0d12ce6870251972" exitCode=0 Dec 11 16:11:35 crc kubenswrapper[5120]: I1211 16:11:35.527575 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebh5hz" event={"ID":"10ebb9de-0ade-4d0b-9dbe-45c91196e002","Type":"ContainerDied","Data":"2b30112585810c89edde5896c36e49d23753afaeac79c7af0d12ce6870251972"} Dec 11 16:11:35 crc kubenswrapper[5120]: I1211 16:11:35.831452 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asdtxg"] Dec 11 16:11:35 crc kubenswrapper[5120]: I1211 16:11:35.838814 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asdtxg" Dec 11 16:11:35 crc kubenswrapper[5120]: I1211 16:11:35.846945 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asdtxg"] Dec 11 16:11:35 crc kubenswrapper[5120]: I1211 16:11:35.930804 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq92g\" (UniqueName: \"kubernetes.io/projected/cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde-kube-api-access-jq92g\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asdtxg\" (UID: \"cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asdtxg" Dec 11 16:11:35 crc kubenswrapper[5120]: I1211 16:11:35.930865 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asdtxg\" (UID: \"cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asdtxg" Dec 11 16:11:35 crc kubenswrapper[5120]: I1211 16:11:35.931014 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asdtxg\" (UID: \"cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asdtxg" Dec 11 16:11:36 crc kubenswrapper[5120]: I1211 16:11:36.032219 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jq92g\" (UniqueName: 
\"kubernetes.io/projected/cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde-kube-api-access-jq92g\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asdtxg\" (UID: \"cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asdtxg" Dec 11 16:11:36 crc kubenswrapper[5120]: I1211 16:11:36.032519 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asdtxg\" (UID: \"cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asdtxg" Dec 11 16:11:36 crc kubenswrapper[5120]: I1211 16:11:36.032703 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asdtxg\" (UID: \"cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asdtxg" Dec 11 16:11:36 crc kubenswrapper[5120]: I1211 16:11:36.033119 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asdtxg\" (UID: \"cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asdtxg" Dec 11 16:11:36 crc kubenswrapper[5120]: I1211 16:11:36.033166 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asdtxg\" (UID: 
\"cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asdtxg" Dec 11 16:11:36 crc kubenswrapper[5120]: I1211 16:11:36.060535 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq92g\" (UniqueName: \"kubernetes.io/projected/cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde-kube-api-access-jq92g\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asdtxg\" (UID: \"cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asdtxg" Dec 11 16:11:36 crc kubenswrapper[5120]: I1211 16:11:36.150833 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asdtxg" Dec 11 16:11:36 crc kubenswrapper[5120]: I1211 16:11:36.632611 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asdtxg"] Dec 11 16:11:36 crc kubenswrapper[5120]: W1211 16:11:36.647398 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf032a0_16b4_4a9d_a85a_f0c2e9f5cfde.slice/crio-7b3af9c309e204f2b7c91265008dd96e77b46813c2d201cf34a465eb1addf070 WatchSource:0}: Error finding container 7b3af9c309e204f2b7c91265008dd96e77b46813c2d201cf34a465eb1addf070: Status 404 returned error can't find the container with id 7b3af9c309e204f2b7c91265008dd96e77b46813c2d201cf34a465eb1addf070 Dec 11 16:11:36 crc kubenswrapper[5120]: I1211 16:11:36.740736 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebh5hz" Dec 11 16:11:36 crc kubenswrapper[5120]: I1211 16:11:36.851747 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10ebb9de-0ade-4d0b-9dbe-45c91196e002-util\") pod \"10ebb9de-0ade-4d0b-9dbe-45c91196e002\" (UID: \"10ebb9de-0ade-4d0b-9dbe-45c91196e002\") " Dec 11 16:11:36 crc kubenswrapper[5120]: I1211 16:11:36.852067 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10ebb9de-0ade-4d0b-9dbe-45c91196e002-bundle\") pod \"10ebb9de-0ade-4d0b-9dbe-45c91196e002\" (UID: \"10ebb9de-0ade-4d0b-9dbe-45c91196e002\") " Dec 11 16:11:36 crc kubenswrapper[5120]: I1211 16:11:36.852269 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df9xh\" (UniqueName: \"kubernetes.io/projected/10ebb9de-0ade-4d0b-9dbe-45c91196e002-kube-api-access-df9xh\") pod \"10ebb9de-0ade-4d0b-9dbe-45c91196e002\" (UID: \"10ebb9de-0ade-4d0b-9dbe-45c91196e002\") " Dec 11 16:11:36 crc kubenswrapper[5120]: I1211 16:11:36.853101 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10ebb9de-0ade-4d0b-9dbe-45c91196e002-bundle" (OuterVolumeSpecName: "bundle") pod "10ebb9de-0ade-4d0b-9dbe-45c91196e002" (UID: "10ebb9de-0ade-4d0b-9dbe-45c91196e002"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:11:36 crc kubenswrapper[5120]: I1211 16:11:36.858246 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10ebb9de-0ade-4d0b-9dbe-45c91196e002-kube-api-access-df9xh" (OuterVolumeSpecName: "kube-api-access-df9xh") pod "10ebb9de-0ade-4d0b-9dbe-45c91196e002" (UID: "10ebb9de-0ade-4d0b-9dbe-45c91196e002"). InnerVolumeSpecName "kube-api-access-df9xh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:11:36 crc kubenswrapper[5120]: I1211 16:11:36.954177 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-df9xh\" (UniqueName: \"kubernetes.io/projected/10ebb9de-0ade-4d0b-9dbe-45c91196e002-kube-api-access-df9xh\") on node \"crc\" DevicePath \"\"" Dec 11 16:11:36 crc kubenswrapper[5120]: I1211 16:11:36.954216 5120 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10ebb9de-0ade-4d0b-9dbe-45c91196e002-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 16:11:36 crc kubenswrapper[5120]: I1211 16:11:36.989416 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10ebb9de-0ade-4d0b-9dbe-45c91196e002-util" (OuterVolumeSpecName: "util") pod "10ebb9de-0ade-4d0b-9dbe-45c91196e002" (UID: "10ebb9de-0ade-4d0b-9dbe-45c91196e002"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:11:37 crc kubenswrapper[5120]: I1211 16:11:37.055217 5120 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10ebb9de-0ade-4d0b-9dbe-45c91196e002-util\") on node \"crc\" DevicePath \"\"" Dec 11 16:11:37 crc kubenswrapper[5120]: I1211 16:11:37.541503 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebh5hz" Dec 11 16:11:37 crc kubenswrapper[5120]: I1211 16:11:37.541539 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ebh5hz" event={"ID":"10ebb9de-0ade-4d0b-9dbe-45c91196e002","Type":"ContainerDied","Data":"a74a069ff584d2254f01fa0aac820e108edaef67d68d2c7cfd803b7ab9ae37c9"} Dec 11 16:11:37 crc kubenswrapper[5120]: I1211 16:11:37.542376 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a74a069ff584d2254f01fa0aac820e108edaef67d68d2c7cfd803b7ab9ae37c9" Dec 11 16:11:37 crc kubenswrapper[5120]: I1211 16:11:37.543060 5120 generic.go:358] "Generic (PLEG): container finished" podID="cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde" containerID="fc3a21baada49f9be9dcdb1d86e77ba64bca38e0f0ac1760446d409d8e700819" exitCode=0 Dec 11 16:11:37 crc kubenswrapper[5120]: I1211 16:11:37.543158 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asdtxg" event={"ID":"cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde","Type":"ContainerDied","Data":"fc3a21baada49f9be9dcdb1d86e77ba64bca38e0f0ac1760446d409d8e700819"} Dec 11 16:11:37 crc kubenswrapper[5120]: I1211 16:11:37.543203 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asdtxg" event={"ID":"cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde","Type":"ContainerStarted","Data":"7b3af9c309e204f2b7c91265008dd96e77b46813c2d201cf34a465eb1addf070"} Dec 11 16:11:39 crc kubenswrapper[5120]: I1211 16:11:39.908996 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-9dp5g"] Dec 11 16:11:39 crc kubenswrapper[5120]: I1211 16:11:39.909561 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="10ebb9de-0ade-4d0b-9dbe-45c91196e002" containerName="util" Dec 11 16:11:39 crc kubenswrapper[5120]: I1211 16:11:39.909573 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="10ebb9de-0ade-4d0b-9dbe-45c91196e002" containerName="util" Dec 11 16:11:39 crc kubenswrapper[5120]: I1211 16:11:39.909583 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="10ebb9de-0ade-4d0b-9dbe-45c91196e002" containerName="pull" Dec 11 16:11:39 crc kubenswrapper[5120]: I1211 16:11:39.909590 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="10ebb9de-0ade-4d0b-9dbe-45c91196e002" containerName="pull" Dec 11 16:11:39 crc kubenswrapper[5120]: I1211 16:11:39.909608 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="10ebb9de-0ade-4d0b-9dbe-45c91196e002" containerName="extract" Dec 11 16:11:39 crc kubenswrapper[5120]: I1211 16:11:39.909614 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="10ebb9de-0ade-4d0b-9dbe-45c91196e002" containerName="extract" Dec 11 16:11:39 crc kubenswrapper[5120]: I1211 16:11:39.909712 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="10ebb9de-0ade-4d0b-9dbe-45c91196e002" containerName="extract" Dec 11 16:11:39 crc kubenswrapper[5120]: I1211 16:11:39.921633 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-9dp5g" Dec 11 16:11:39 crc kubenswrapper[5120]: I1211 16:11:39.923194 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-9dp5g"] Dec 11 16:11:39 crc kubenswrapper[5120]: I1211 16:11:39.924370 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-ch6l2\"" Dec 11 16:11:39 crc kubenswrapper[5120]: I1211 16:11:39.924662 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Dec 11 16:11:39 crc kubenswrapper[5120]: I1211 16:11:39.927447 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Dec 11 16:11:39 crc kubenswrapper[5120]: I1211 16:11:39.955115 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-58975f54f6-xp5tp"] Dec 11 16:11:39 crc kubenswrapper[5120]: I1211 16:11:39.965361 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-58975f54f6-xp5tp" Dec 11 16:11:39 crc kubenswrapper[5120]: I1211 16:11:39.969266 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Dec 11 16:11:39 crc kubenswrapper[5120]: I1211 16:11:39.969418 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-zwwnf\"" Dec 11 16:11:39 crc kubenswrapper[5120]: I1211 16:11:39.969466 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-58975f54f6-m5fnk"] Dec 11 16:11:39 crc kubenswrapper[5120]: I1211 16:11:39.975946 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-58975f54f6-xp5tp"] Dec 11 16:11:39 crc kubenswrapper[5120]: I1211 16:11:39.976056 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-58975f54f6-m5fnk" Dec 11 16:11:39 crc kubenswrapper[5120]: I1211 16:11:39.980338 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-58975f54f6-m5fnk"] Dec 11 16:11:39 crc kubenswrapper[5120]: I1211 16:11:39.993756 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-654vp\" (UniqueName: \"kubernetes.io/projected/405e4958-9400-4b75-baf7-ad3ad1361162-kube-api-access-654vp\") pod \"obo-prometheus-operator-86648f486b-9dp5g\" (UID: \"405e4958-9400-4b75-baf7-ad3ad1361162\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-9dp5g" Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.094583 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e02e8f19-f7bb-4dad-8b0d-1c821b085b6b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-58975f54f6-xp5tp\" (UID: \"e02e8f19-f7bb-4dad-8b0d-1c821b085b6b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-58975f54f6-xp5tp" Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.094646 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-654vp\" (UniqueName: \"kubernetes.io/projected/405e4958-9400-4b75-baf7-ad3ad1361162-kube-api-access-654vp\") pod \"obo-prometheus-operator-86648f486b-9dp5g\" (UID: \"405e4958-9400-4b75-baf7-ad3ad1361162\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-9dp5g" Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.094703 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2484e4c7-6f94-4161-93d0-0f55b304286c-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-58975f54f6-m5fnk\" (UID: \"2484e4c7-6f94-4161-93d0-0f55b304286c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-58975f54f6-m5fnk" Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.094723 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e02e8f19-f7bb-4dad-8b0d-1c821b085b6b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-58975f54f6-xp5tp\" (UID: \"e02e8f19-f7bb-4dad-8b0d-1c821b085b6b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-58975f54f6-xp5tp" Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.094860 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2484e4c7-6f94-4161-93d0-0f55b304286c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-58975f54f6-m5fnk\" (UID: \"2484e4c7-6f94-4161-93d0-0f55b304286c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-58975f54f6-m5fnk" Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.125360 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-654vp\" (UniqueName: \"kubernetes.io/projected/405e4958-9400-4b75-baf7-ad3ad1361162-kube-api-access-654vp\") pod \"obo-prometheus-operator-86648f486b-9dp5g\" (UID: \"405e4958-9400-4b75-baf7-ad3ad1361162\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-9dp5g" Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.179490 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-78c97476f4-mw4h4"] Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.188403 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-mw4h4" Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.196365 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.196483 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-k94mn\"" Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.196988 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e02e8f19-f7bb-4dad-8b0d-1c821b085b6b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-58975f54f6-xp5tp\" (UID: \"e02e8f19-f7bb-4dad-8b0d-1c821b085b6b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-58975f54f6-xp5tp" Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.197103 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2484e4c7-6f94-4161-93d0-0f55b304286c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-58975f54f6-m5fnk\" (UID: \"2484e4c7-6f94-4161-93d0-0f55b304286c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-58975f54f6-m5fnk" Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.197197 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e02e8f19-f7bb-4dad-8b0d-1c821b085b6b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-58975f54f6-xp5tp\" (UID: \"e02e8f19-f7bb-4dad-8b0d-1c821b085b6b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-58975f54f6-xp5tp" Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.197332 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2484e4c7-6f94-4161-93d0-0f55b304286c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-58975f54f6-m5fnk\" (UID: \"2484e4c7-6f94-4161-93d0-0f55b304286c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-58975f54f6-m5fnk" Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.200557 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2484e4c7-6f94-4161-93d0-0f55b304286c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-58975f54f6-m5fnk\" (UID: \"2484e4c7-6f94-4161-93d0-0f55b304286c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-58975f54f6-m5fnk" Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.201700 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2484e4c7-6f94-4161-93d0-0f55b304286c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-58975f54f6-m5fnk\" (UID: \"2484e4c7-6f94-4161-93d0-0f55b304286c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-58975f54f6-m5fnk" Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.202105 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e02e8f19-f7bb-4dad-8b0d-1c821b085b6b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-58975f54f6-xp5tp\" (UID: \"e02e8f19-f7bb-4dad-8b0d-1c821b085b6b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-58975f54f6-xp5tp" Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.204093 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-mw4h4"] Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.206589 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/e02e8f19-f7bb-4dad-8b0d-1c821b085b6b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-58975f54f6-xp5tp\" (UID: \"e02e8f19-f7bb-4dad-8b0d-1c821b085b6b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-58975f54f6-xp5tp"
Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.241240 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-9dp5g"
Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.280423 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-58975f54f6-xp5tp"
Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.298958 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c4801f96-9fe0-4763-8cd9-f5a68adfeeb9-observability-operator-tls\") pod \"observability-operator-78c97476f4-mw4h4\" (UID: \"c4801f96-9fe0-4763-8cd9-f5a68adfeeb9\") " pod="openshift-operators/observability-operator-78c97476f4-mw4h4"
Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.299093 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w6nl\" (UniqueName: \"kubernetes.io/projected/c4801f96-9fe0-4763-8cd9-f5a68adfeeb9-kube-api-access-8w6nl\") pod \"observability-operator-78c97476f4-mw4h4\" (UID: \"c4801f96-9fe0-4763-8cd9-f5a68adfeeb9\") " pod="openshift-operators/observability-operator-78c97476f4-mw4h4"
Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.323585 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-58975f54f6-m5fnk"
Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.378573 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-x5tz9"]
Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.391967 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-x5tz9"
Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.394729 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-9bj47\""
Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.398990 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-x5tz9"]
Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.400093 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c4801f96-9fe0-4763-8cd9-f5a68adfeeb9-observability-operator-tls\") pod \"observability-operator-78c97476f4-mw4h4\" (UID: \"c4801f96-9fe0-4763-8cd9-f5a68adfeeb9\") " pod="openshift-operators/observability-operator-78c97476f4-mw4h4"
Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.400138 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8w6nl\" (UniqueName: \"kubernetes.io/projected/c4801f96-9fe0-4763-8cd9-f5a68adfeeb9-kube-api-access-8w6nl\") pod \"observability-operator-78c97476f4-mw4h4\" (UID: \"c4801f96-9fe0-4763-8cd9-f5a68adfeeb9\") " pod="openshift-operators/observability-operator-78c97476f4-mw4h4"
Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.407971 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c4801f96-9fe0-4763-8cd9-f5a68adfeeb9-observability-operator-tls\") pod \"observability-operator-78c97476f4-mw4h4\" (UID: \"c4801f96-9fe0-4763-8cd9-f5a68adfeeb9\") " pod="openshift-operators/observability-operator-78c97476f4-mw4h4"
Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.419317 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w6nl\" (UniqueName: \"kubernetes.io/projected/c4801f96-9fe0-4763-8cd9-f5a68adfeeb9-kube-api-access-8w6nl\") pod \"observability-operator-78c97476f4-mw4h4\" (UID: \"c4801f96-9fe0-4763-8cd9-f5a68adfeeb9\") " pod="openshift-operators/observability-operator-78c97476f4-mw4h4"
Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.501526 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7n7x\" (UniqueName: \"kubernetes.io/projected/046886b8-c2ac-4277-88e4-bd219b56d6b0-kube-api-access-r7n7x\") pod \"perses-operator-68bdb49cbf-x5tz9\" (UID: \"046886b8-c2ac-4277-88e4-bd219b56d6b0\") " pod="openshift-operators/perses-operator-68bdb49cbf-x5tz9"
Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.501614 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/046886b8-c2ac-4277-88e4-bd219b56d6b0-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-x5tz9\" (UID: \"046886b8-c2ac-4277-88e4-bd219b56d6b0\") " pod="openshift-operators/perses-operator-68bdb49cbf-x5tz9"
Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.502857 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-mw4h4"
Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.602969 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r7n7x\" (UniqueName: \"kubernetes.io/projected/046886b8-c2ac-4277-88e4-bd219b56d6b0-kube-api-access-r7n7x\") pod \"perses-operator-68bdb49cbf-x5tz9\" (UID: \"046886b8-c2ac-4277-88e4-bd219b56d6b0\") " pod="openshift-operators/perses-operator-68bdb49cbf-x5tz9"
Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.603058 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/046886b8-c2ac-4277-88e4-bd219b56d6b0-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-x5tz9\" (UID: \"046886b8-c2ac-4277-88e4-bd219b56d6b0\") " pod="openshift-operators/perses-operator-68bdb49cbf-x5tz9"
Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.604109 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/046886b8-c2ac-4277-88e4-bd219b56d6b0-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-x5tz9\" (UID: \"046886b8-c2ac-4277-88e4-bd219b56d6b0\") " pod="openshift-operators/perses-operator-68bdb49cbf-x5tz9"
Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.618913 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7n7x\" (UniqueName: \"kubernetes.io/projected/046886b8-c2ac-4277-88e4-bd219b56d6b0-kube-api-access-r7n7x\") pod \"perses-operator-68bdb49cbf-x5tz9\" (UID: \"046886b8-c2ac-4277-88e4-bd219b56d6b0\") " pod="openshift-operators/perses-operator-68bdb49cbf-x5tz9"
Dec 11 16:11:40 crc kubenswrapper[5120]: I1211 16:11:40.722531 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-x5tz9"
Dec 11 16:11:42 crc kubenswrapper[5120]: I1211 16:11:42.429753 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-zqqdz"
Dec 11 16:11:42 crc kubenswrapper[5120]: I1211 16:11:42.517119 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-s2npb"]
Dec 11 16:11:43 crc kubenswrapper[5120]: W1211 16:11:43.070506 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod405e4958_9400_4b75_baf7_ad3ad1361162.slice/crio-af9500752d331e63bf5b1a2a5a646cc4d762c3d43321e35cb9b52d3370fa953c WatchSource:0}: Error finding container af9500752d331e63bf5b1a2a5a646cc4d762c3d43321e35cb9b52d3370fa953c: Status 404 returned error can't find the container with id af9500752d331e63bf5b1a2a5a646cc4d762c3d43321e35cb9b52d3370fa953c
Dec 11 16:11:43 crc kubenswrapper[5120]: I1211 16:11:43.072911 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-9dp5g"]
Dec 11 16:11:43 crc kubenswrapper[5120]: I1211 16:11:43.083784 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-58975f54f6-m5fnk"]
Dec 11 16:11:43 crc kubenswrapper[5120]: W1211 16:11:43.087229 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2484e4c7_6f94_4161_93d0_0f55b304286c.slice/crio-920ddd7f0dded90b2c7571c407f9ca332c9c23488e8b441cb0c268d36df9a12a WatchSource:0}: Error finding container 920ddd7f0dded90b2c7571c407f9ca332c9c23488e8b441cb0c268d36df9a12a: Status 404 returned error can't find the container with id 920ddd7f0dded90b2c7571c407f9ca332c9c23488e8b441cb0c268d36df9a12a
Dec 11 16:11:43 crc kubenswrapper[5120]: I1211 16:11:43.095024 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-mw4h4"]
Dec 11 16:11:43 crc kubenswrapper[5120]: I1211 16:11:43.099454 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-x5tz9"]
Dec 11 16:11:43 crc kubenswrapper[5120]: W1211 16:11:43.110270 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod046886b8_c2ac_4277_88e4_bd219b56d6b0.slice/crio-8800d9570cd7165160be6ce4dcb65fcad55d155dafac0e4f5f7c2c5722e8ba82 WatchSource:0}: Error finding container 8800d9570cd7165160be6ce4dcb65fcad55d155dafac0e4f5f7c2c5722e8ba82: Status 404 returned error can't find the container with id 8800d9570cd7165160be6ce4dcb65fcad55d155dafac0e4f5f7c2c5722e8ba82
Dec 11 16:11:43 crc kubenswrapper[5120]: I1211 16:11:43.133342 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-58975f54f6-xp5tp"]
Dec 11 16:11:43 crc kubenswrapper[5120]: I1211 16:11:43.599795 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-9dp5g" event={"ID":"405e4958-9400-4b75-baf7-ad3ad1361162","Type":"ContainerStarted","Data":"af9500752d331e63bf5b1a2a5a646cc4d762c3d43321e35cb9b52d3370fa953c"}
Dec 11 16:11:43 crc kubenswrapper[5120]: I1211 16:11:43.601286 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-58975f54f6-xp5tp" event={"ID":"e02e8f19-f7bb-4dad-8b0d-1c821b085b6b","Type":"ContainerStarted","Data":"cf2ed43e8292a57299d4d195aca70a7d846e02f4e86534e32786f423928bcdcd"}
Dec 11 16:11:43 crc kubenswrapper[5120]: I1211 16:11:43.602498 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-mw4h4" event={"ID":"c4801f96-9fe0-4763-8cd9-f5a68adfeeb9","Type":"ContainerStarted","Data":"124d8444f916ba488e6c1389d85c1401e71d21d92d9e9d53c745e501155728a1"}
Dec 11 16:11:43 crc kubenswrapper[5120]: I1211 16:11:43.603765 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-58975f54f6-m5fnk" event={"ID":"2484e4c7-6f94-4161-93d0-0f55b304286c","Type":"ContainerStarted","Data":"920ddd7f0dded90b2c7571c407f9ca332c9c23488e8b441cb0c268d36df9a12a"}
Dec 11 16:11:43 crc kubenswrapper[5120]: I1211 16:11:43.604914 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-x5tz9" event={"ID":"046886b8-c2ac-4277-88e4-bd219b56d6b0","Type":"ContainerStarted","Data":"8800d9570cd7165160be6ce4dcb65fcad55d155dafac0e4f5f7c2c5722e8ba82"}
Dec 11 16:11:43 crc kubenswrapper[5120]: I1211 16:11:43.607408 5120 generic.go:358] "Generic (PLEG): container finished" podID="cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde" containerID="6c8d0d1b5666cf196cb8fbf33b69e8bed05b0f46ceea7a8d515ab2ec1dabdde4" exitCode=0
Dec 11 16:11:43 crc kubenswrapper[5120]: I1211 16:11:43.607468 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asdtxg" event={"ID":"cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde","Type":"ContainerDied","Data":"6c8d0d1b5666cf196cb8fbf33b69e8bed05b0f46ceea7a8d515ab2ec1dabdde4"}
Dec 11 16:11:44 crc kubenswrapper[5120]: I1211 16:11:44.628719 5120 generic.go:358] "Generic (PLEG): container finished" podID="cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde" containerID="c9e2bfec0801d99839de1dc980299884e0cd3d9d9c39adfe6c3ede3bc42378a0" exitCode=0
Dec 11 16:11:44 crc kubenswrapper[5120]: I1211 16:11:44.629115 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asdtxg" event={"ID":"cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde","Type":"ContainerDied","Data":"c9e2bfec0801d99839de1dc980299884e0cd3d9d9c39adfe6c3ede3bc42378a0"}
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.020229 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asdtxg"
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.041431 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde-bundle\") pod \"cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde\" (UID: \"cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde\") "
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.042040 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde-util\") pod \"cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde\" (UID: \"cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde\") "
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.042080 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jq92g\" (UniqueName: \"kubernetes.io/projected/cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde-kube-api-access-jq92g\") pod \"cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde\" (UID: \"cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde\") "
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.049243 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde-bundle" (OuterVolumeSpecName: "bundle") pod "cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde" (UID: "cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.072319 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde-kube-api-access-jq92g" (OuterVolumeSpecName: "kube-api-access-jq92g") pod "cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde" (UID: "cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde"). InnerVolumeSpecName "kube-api-access-jq92g". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.072760 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde-util" (OuterVolumeSpecName: "util") pod "cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde" (UID: "cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.143132 5120 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde-bundle\") on node \"crc\" DevicePath \"\""
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.143176 5120 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde-util\") on node \"crc\" DevicePath \"\""
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.143186 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jq92g\" (UniqueName: \"kubernetes.io/projected/cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde-kube-api-access-jq92g\") on node \"crc\" DevicePath \"\""
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.383752 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-7cf98dd5bf-wzjcb"]
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.384558 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde" containerName="util"
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.384571 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde" containerName="util"
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.384587 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde" containerName="extract"
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.384593 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde" containerName="extract"
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.384603 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde" containerName="pull"
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.384610 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde" containerName="pull"
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.384814 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde" containerName="extract"
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.389516 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-7cf98dd5bf-wzjcb"
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.393085 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\""
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.396886 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\""
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.397263 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-vftbb\""
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.397410 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\""
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.401967 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-7cf98dd5bf-wzjcb"]
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.453912 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8eec4aeb-49af-4a60-abcd-aff7fcc250dc-apiservice-cert\") pod \"elastic-operator-7cf98dd5bf-wzjcb\" (UID: \"8eec4aeb-49af-4a60-abcd-aff7fcc250dc\") " pod="service-telemetry/elastic-operator-7cf98dd5bf-wzjcb"
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.453972 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t8ng\" (UniqueName: \"kubernetes.io/projected/8eec4aeb-49af-4a60-abcd-aff7fcc250dc-kube-api-access-6t8ng\") pod \"elastic-operator-7cf98dd5bf-wzjcb\" (UID: \"8eec4aeb-49af-4a60-abcd-aff7fcc250dc\") " pod="service-telemetry/elastic-operator-7cf98dd5bf-wzjcb"
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.454034 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8eec4aeb-49af-4a60-abcd-aff7fcc250dc-webhook-cert\") pod \"elastic-operator-7cf98dd5bf-wzjcb\" (UID: \"8eec4aeb-49af-4a60-abcd-aff7fcc250dc\") " pod="service-telemetry/elastic-operator-7cf98dd5bf-wzjcb"
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.554819 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6t8ng\" (UniqueName: \"kubernetes.io/projected/8eec4aeb-49af-4a60-abcd-aff7fcc250dc-kube-api-access-6t8ng\") pod \"elastic-operator-7cf98dd5bf-wzjcb\" (UID: \"8eec4aeb-49af-4a60-abcd-aff7fcc250dc\") " pod="service-telemetry/elastic-operator-7cf98dd5bf-wzjcb"
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.554916 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8eec4aeb-49af-4a60-abcd-aff7fcc250dc-webhook-cert\") pod \"elastic-operator-7cf98dd5bf-wzjcb\" (UID: \"8eec4aeb-49af-4a60-abcd-aff7fcc250dc\") " pod="service-telemetry/elastic-operator-7cf98dd5bf-wzjcb"
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.554991 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8eec4aeb-49af-4a60-abcd-aff7fcc250dc-apiservice-cert\") pod \"elastic-operator-7cf98dd5bf-wzjcb\" (UID: \"8eec4aeb-49af-4a60-abcd-aff7fcc250dc\") " pod="service-telemetry/elastic-operator-7cf98dd5bf-wzjcb"
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.560221 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8eec4aeb-49af-4a60-abcd-aff7fcc250dc-webhook-cert\") pod \"elastic-operator-7cf98dd5bf-wzjcb\" (UID: \"8eec4aeb-49af-4a60-abcd-aff7fcc250dc\") " pod="service-telemetry/elastic-operator-7cf98dd5bf-wzjcb"
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.562946 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8eec4aeb-49af-4a60-abcd-aff7fcc250dc-apiservice-cert\") pod \"elastic-operator-7cf98dd5bf-wzjcb\" (UID: \"8eec4aeb-49af-4a60-abcd-aff7fcc250dc\") " pod="service-telemetry/elastic-operator-7cf98dd5bf-wzjcb"
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.595594 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t8ng\" (UniqueName: \"kubernetes.io/projected/8eec4aeb-49af-4a60-abcd-aff7fcc250dc-kube-api-access-6t8ng\") pod \"elastic-operator-7cf98dd5bf-wzjcb\" (UID: \"8eec4aeb-49af-4a60-abcd-aff7fcc250dc\") " pod="service-telemetry/elastic-operator-7cf98dd5bf-wzjcb"
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.662696 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asdtxg" event={"ID":"cbf032a0-16b4-4a9d-a85a-f0c2e9f5cfde","Type":"ContainerDied","Data":"7b3af9c309e204f2b7c91265008dd96e77b46813c2d201cf34a465eb1addf070"}
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.662739 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b3af9c309e204f2b7c91265008dd96e77b46813c2d201cf34a465eb1addf070"
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.662709 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asdtxg"
Dec 11 16:11:46 crc kubenswrapper[5120]: I1211 16:11:46.723528 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-7cf98dd5bf-wzjcb"
Dec 11 16:11:47 crc kubenswrapper[5120]: I1211 16:11:47.214625 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-7cf98dd5bf-wzjcb"]
Dec 11 16:11:47 crc kubenswrapper[5120]: W1211 16:11:47.224234 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8eec4aeb_49af_4a60_abcd_aff7fcc250dc.slice/crio-96fd2cd8bb047f826afd7084fdcd29ea6ec6ed3789575816e41a77682e67af9c WatchSource:0}: Error finding container 96fd2cd8bb047f826afd7084fdcd29ea6ec6ed3789575816e41a77682e67af9c: Status 404 returned error can't find the container with id 96fd2cd8bb047f826afd7084fdcd29ea6ec6ed3789575816e41a77682e67af9c
Dec 11 16:11:47 crc kubenswrapper[5120]: I1211 16:11:47.674807 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-7cf98dd5bf-wzjcb" event={"ID":"8eec4aeb-49af-4a60-abcd-aff7fcc250dc","Type":"ContainerStarted","Data":"96fd2cd8bb047f826afd7084fdcd29ea6ec6ed3789575816e41a77682e67af9c"}
Dec 11 16:11:59 crc kubenswrapper[5120]: I1211 16:11:59.066076 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-7jcf9"]
Dec 11 16:11:59 crc kubenswrapper[5120]: I1211 16:11:59.075258 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-7jcf9"
Dec 11 16:11:59 crc kubenswrapper[5120]: I1211 16:11:59.077735 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\""
Dec 11 16:11:59 crc kubenswrapper[5120]: I1211 16:11:59.078002 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\""
Dec 11 16:11:59 crc kubenswrapper[5120]: I1211 16:11:59.078139 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-hpwkv\""
Dec 11 16:11:59 crc kubenswrapper[5120]: I1211 16:11:59.093548 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-7jcf9"]
Dec 11 16:11:59 crc kubenswrapper[5120]: I1211 16:11:59.170753 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c8e9f618-5763-4411-a628-75edfea1c008-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-7jcf9\" (UID: \"c8e9f618-5763-4411-a628-75edfea1c008\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-7jcf9"
Dec 11 16:11:59 crc kubenswrapper[5120]: I1211 16:11:59.170871 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrsw6\" (UniqueName: \"kubernetes.io/projected/c8e9f618-5763-4411-a628-75edfea1c008-kube-api-access-lrsw6\") pod \"cert-manager-operator-controller-manager-64c74584c4-7jcf9\" (UID: \"c8e9f618-5763-4411-a628-75edfea1c008\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-7jcf9"
Dec 11 16:11:59 crc kubenswrapper[5120]: I1211 16:11:59.273282 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c8e9f618-5763-4411-a628-75edfea1c008-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-7jcf9\" (UID: \"c8e9f618-5763-4411-a628-75edfea1c008\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-7jcf9"
Dec 11 16:11:59 crc kubenswrapper[5120]: I1211 16:11:59.273730 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lrsw6\" (UniqueName: \"kubernetes.io/projected/c8e9f618-5763-4411-a628-75edfea1c008-kube-api-access-lrsw6\") pod \"cert-manager-operator-controller-manager-64c74584c4-7jcf9\" (UID: \"c8e9f618-5763-4411-a628-75edfea1c008\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-7jcf9"
Dec 11 16:11:59 crc kubenswrapper[5120]: I1211 16:11:59.273916 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c8e9f618-5763-4411-a628-75edfea1c008-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-7jcf9\" (UID: \"c8e9f618-5763-4411-a628-75edfea1c008\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-7jcf9"
Dec 11 16:11:59 crc kubenswrapper[5120]: I1211 16:11:59.310188 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrsw6\" (UniqueName: \"kubernetes.io/projected/c8e9f618-5763-4411-a628-75edfea1c008-kube-api-access-lrsw6\") pod \"cert-manager-operator-controller-manager-64c74584c4-7jcf9\" (UID: \"c8e9f618-5763-4411-a628-75edfea1c008\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-7jcf9"
Dec 11 16:11:59 crc kubenswrapper[5120]: I1211 16:11:59.400540 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-7jcf9"
Dec 11 16:12:00 crc kubenswrapper[5120]: I1211 16:12:00.282850 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-7jcf9"]
Dec 11 16:12:00 crc kubenswrapper[5120]: W1211 16:12:00.328183 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8e9f618_5763_4411_a628_75edfea1c008.slice/crio-7a28db6376e88d5d75d31ee4ab4d97defe9cf6f65769328a6a576d21d8f4ff10 WatchSource:0}: Error finding container 7a28db6376e88d5d75d31ee4ab4d97defe9cf6f65769328a6a576d21d8f4ff10: Status 404 returned error can't find the container with id 7a28db6376e88d5d75d31ee4ab4d97defe9cf6f65769328a6a576d21d8f4ff10
Dec 11 16:12:00 crc kubenswrapper[5120]: I1211 16:12:00.757900 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-mw4h4" event={"ID":"c4801f96-9fe0-4763-8cd9-f5a68adfeeb9","Type":"ContainerStarted","Data":"1f5d0e71b584db7bcb580524cd643462e0b317ea108dcb5ff9a118c49a02c8af"}
Dec 11 16:12:00 crc kubenswrapper[5120]: I1211 16:12:00.759042 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-78c97476f4-mw4h4"
Dec 11 16:12:00 crc kubenswrapper[5120]: I1211 16:12:00.760602 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-58975f54f6-m5fnk" event={"ID":"2484e4c7-6f94-4161-93d0-0f55b304286c","Type":"ContainerStarted","Data":"ec61063406a5dbef9469c259fab702957d4caa3880cff0d1456bcb2d41920769"}
Dec 11 16:12:00 crc kubenswrapper[5120]: I1211 16:12:00.762220 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-x5tz9" event={"ID":"046886b8-c2ac-4277-88e4-bd219b56d6b0","Type":"ContainerStarted","Data":"9a9f28c77fbacf9e07773beed2ba461b3589d831c51e54efa8af9518e1020a5d"}
Dec 11 16:12:00 crc kubenswrapper[5120]: I1211 16:12:00.762574 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-68bdb49cbf-x5tz9"
Dec 11 16:12:00 crc kubenswrapper[5120]: I1211 16:12:00.763974 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-7cf98dd5bf-wzjcb" event={"ID":"8eec4aeb-49af-4a60-abcd-aff7fcc250dc","Type":"ContainerStarted","Data":"4d54b68c429f846c013d3145ff4880be1deebb34f1a047d4bbdb18cda73deca2"}
Dec 11 16:12:00 crc kubenswrapper[5120]: I1211 16:12:00.765589 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-7jcf9" event={"ID":"c8e9f618-5763-4411-a628-75edfea1c008","Type":"ContainerStarted","Data":"7a28db6376e88d5d75d31ee4ab4d97defe9cf6f65769328a6a576d21d8f4ff10"}
Dec 11 16:12:00 crc kubenswrapper[5120]: I1211 16:12:00.767824 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-9dp5g" event={"ID":"405e4958-9400-4b75-baf7-ad3ad1361162","Type":"ContainerStarted","Data":"549870fcc135847a91b1ca727b7c586e4b1aac22035b91cef7eeb194b0d4dfb9"}
Dec 11 16:12:00 crc kubenswrapper[5120]: I1211 16:12:00.772324 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-58975f54f6-xp5tp" event={"ID":"e02e8f19-f7bb-4dad-8b0d-1c821b085b6b","Type":"ContainerStarted","Data":"a1f3e31d78a914a90390a648afc6dd4c2ec1a1c79299f0963666825872bcfda4"}
Dec 11 16:12:00 crc kubenswrapper[5120]: I1211 16:12:00.775052 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-78c97476f4-mw4h4"
Dec 11 16:12:00 crc kubenswrapper[5120]: I1211 16:12:00.793411 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-78c97476f4-mw4h4" podStartSLOduration=3.851817626 podStartE2EDuration="20.793398566s" podCreationTimestamp="2025-12-11 16:11:40 +0000 UTC" firstStartedPulling="2025-12-11 16:11:43.111753438 +0000 UTC m=+652.366056769" lastFinishedPulling="2025-12-11 16:12:00.053334378 +0000 UTC m=+669.307637709" observedRunningTime="2025-12-11 16:12:00.787101485 +0000 UTC m=+670.041404816" watchObservedRunningTime="2025-12-11 16:12:00.793398566 +0000 UTC m=+670.047701897"
Dec 11 16:12:00 crc kubenswrapper[5120]: I1211 16:12:00.816172 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-58975f54f6-xp5tp" podStartSLOduration=4.921698047 podStartE2EDuration="21.816136439s" podCreationTimestamp="2025-12-11 16:11:39 +0000 UTC" firstStartedPulling="2025-12-11 16:11:43.143549715 +0000 UTC m=+652.397853036" lastFinishedPulling="2025-12-11 16:12:00.037988097 +0000 UTC m=+669.292291428" observedRunningTime="2025-12-11 16:12:00.805336659 +0000 UTC m=+670.059639990" watchObservedRunningTime="2025-12-11 16:12:00.816136439 +0000 UTC m=+670.070439760"
Dec 11 16:12:00 crc kubenswrapper[5120]: I1211 16:12:00.870269 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-86648f486b-9dp5g" podStartSLOduration=4.871590993 podStartE2EDuration="21.870237883s" podCreationTimestamp="2025-12-11 16:11:39 +0000 UTC" firstStartedPulling="2025-12-11 16:11:43.072674869 +0000 UTC m=+652.326978200" lastFinishedPulling="2025-12-11 16:12:00.071321759 +0000 UTC m=+669.325625090" observedRunningTime="2025-12-11 16:12:00.866721774 +0000 UTC m=+670.121025105" watchObservedRunningTime="2025-12-11 16:12:00.870237883 +0000 UTC m=+670.124541214"
Dec 11 16:12:00 crc kubenswrapper[5120]: I1211 16:12:00.899279 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-7cf98dd5bf-wzjcb" podStartSLOduration=2.091534851 podStartE2EDuration="14.899255757s" podCreationTimestamp="2025-12-11 16:11:46 +0000 UTC" firstStartedPulling="2025-12-11 16:11:47.229822256 +0000 UTC m=+656.484125587" lastFinishedPulling="2025-12-11 16:12:00.037543162 +0000 UTC m=+669.291846493" observedRunningTime="2025-12-11 16:12:00.898216335 +0000 UTC m=+670.152519656" watchObservedRunningTime="2025-12-11 16:12:00.899255757 +0000 UTC m=+670.153559108"
Dec 11 16:12:00 crc kubenswrapper[5120]: I1211 16:12:00.947881 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-58975f54f6-m5fnk" podStartSLOduration=4.93965033 podStartE2EDuration="21.947859489s" podCreationTimestamp="2025-12-11 16:11:39 +0000 UTC" firstStartedPulling="2025-12-11 16:11:43.089686157 +0000 UTC m=+652.343989488" lastFinishedPulling="2025-12-11 16:12:00.097895316 +0000 UTC m=+669.352198647" observedRunningTime="2025-12-11 16:12:00.939728038 +0000 UTC m=+670.194031369" watchObservedRunningTime="2025-12-11 16:12:00.947859489 +0000 UTC m=+670.202162820"
Dec 11 16:12:00 crc kubenswrapper[5120]: I1211 16:12:00.985148 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-68bdb49cbf-x5tz9" podStartSLOduration=4.046862186 podStartE2EDuration="20.985121885s" podCreationTimestamp="2025-12-11 16:11:40 +0000 UTC" firstStartedPulling="2025-12-11 16:11:43.111995425 +0000 UTC m=+652.366298756" lastFinishedPulling="2025-12-11 16:12:00.050255124 +0000 UTC m=+669.304558455" observedRunningTime="2025-12-11 16:12:00.980618534 +0000 UTC m=+670.234921865" watchObservedRunningTime="2025-12-11 16:12:00.985121885 +0000 UTC m=+670.239425216"
Dec 11 16:12:06 crc kubenswrapper[5120]: I1211 16:12:06.791697 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 11 16:12:06 crc kubenswrapper[5120]: I1211 16:12:06.799263 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 16:12:06 crc kubenswrapper[5120]: I1211 16:12:06.805183 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\""
Dec 11 16:12:06 crc kubenswrapper[5120]: I1211 16:12:06.805208 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\""
Dec 11 16:12:06 crc kubenswrapper[5120]: I1211 16:12:06.805298 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\""
Dec 11 16:12:06 crc kubenswrapper[5120]: I1211 16:12:06.805540 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\""
Dec 11 16:12:06 crc kubenswrapper[5120]: I1211 16:12:06.805602 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\""
Dec 11 16:12:06 crc kubenswrapper[5120]: I1211 16:12:06.805692 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\""
Dec 11 16:12:06 crc kubenswrapper[5120]: I1211 16:12:06.805732 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-dndmv\""
Dec 11 16:12:06 crc kubenswrapper[5120]: I1211 16:12:06.805822 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\""
Dec 11 16:12:06 crc kubenswrapper[5120]: I1211 16:12:06.805851 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\""
Dec 11 16:12:06 crc kubenswrapper[5120]: I1211 16:12:06.811545 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 11 16:12:06 crc kubenswrapper[5120]: I1211 16:12:06.896909 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 16:12:06 crc kubenswrapper[5120]: I1211 16:12:06.897209 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 16:12:06 crc kubenswrapper[5120]: I1211 16:12:06.897577 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 16:12:06 crc kubenswrapper[5120]: I1211 16:12:06.897847 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/69934653-bd26-4a43-b097-8692e246cdfa-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 16:12:06 crc
kubenswrapper[5120]: I1211 16:12:06.898042 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:06 crc kubenswrapper[5120]: I1211 16:12:06.898133 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:06 crc kubenswrapper[5120]: I1211 16:12:06.898294 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/69934653-bd26-4a43-b097-8692e246cdfa-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:06 crc kubenswrapper[5120]: I1211 16:12:06.898764 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/69934653-bd26-4a43-b097-8692e246cdfa-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:06 crc kubenswrapper[5120]: I1211 16:12:06.898887 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: 
\"kubernetes.io/configmap/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:06 crc kubenswrapper[5120]: I1211 16:12:06.900484 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:06 crc kubenswrapper[5120]: I1211 16:12:06.900642 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:06 crc kubenswrapper[5120]: I1211 16:12:06.901510 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:06 crc kubenswrapper[5120]: I1211 16:12:06.901711 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " 
pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:06 crc kubenswrapper[5120]: I1211 16:12:06.901806 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/69934653-bd26-4a43-b097-8692e246cdfa-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:06 crc kubenswrapper[5120]: I1211 16:12:06.901890 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.003322 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.003371 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/69934653-bd26-4a43-b097-8692e246cdfa-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.003415 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: 
\"kubernetes.io/empty-dir/69934653-bd26-4a43-b097-8692e246cdfa-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.005784 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.005877 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.005928 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.005961 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 
16:12:07.006023 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.006066 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/69934653-bd26-4a43-b097-8692e246cdfa-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.006111 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.006236 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.006516 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: 
\"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.006603 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.006670 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/69934653-bd26-4a43-b097-8692e246cdfa-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.006836 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.008652 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.011036 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: 
\"kubernetes.io/configmap/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.011501 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.014596 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/69934653-bd26-4a43-b097-8692e246cdfa-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.014666 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/69934653-bd26-4a43-b097-8692e246cdfa-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.015174 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.015601 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.016434 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.017244 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.017596 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/69934653-bd26-4a43-b097-8692e246cdfa-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.017889 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/69934653-bd26-4a43-b097-8692e246cdfa-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 
16:12:07.019603 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.020597 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.022201 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.034494 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/69934653-bd26-4a43-b097-8692e246cdfa-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"69934653-bd26-4a43-b097-8692e246cdfa\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.120358 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.432568 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.570260 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-s2npb" podUID="e502d0f9-d2f5-433f-ad5c-5353c996ba0e" containerName="registry" containerID="cri-o://61abd2db50eb2726a6fd59f5066c2c069871cac00ee6884254875da3b0f9a032" gracePeriod=30 Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.817845 5120 generic.go:358] "Generic (PLEG): container finished" podID="e502d0f9-d2f5-433f-ad5c-5353c996ba0e" containerID="61abd2db50eb2726a6fd59f5066c2c069871cac00ee6884254875da3b0f9a032" exitCode=0 Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.817943 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-s2npb" event={"ID":"e502d0f9-d2f5-433f-ad5c-5353c996ba0e","Type":"ContainerDied","Data":"61abd2db50eb2726a6fd59f5066c2c069871cac00ee6884254875da3b0f9a032"} Dec 11 16:12:07 crc kubenswrapper[5120]: I1211 16:12:07.822225 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"69934653-bd26-4a43-b097-8692e246cdfa","Type":"ContainerStarted","Data":"6ff6beb73c49859889d41d7afe923329c70bd7cb07cfde2e5884155e54273b63"} Dec 11 16:12:09 crc kubenswrapper[5120]: I1211 16:12:09.004812 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-s2npb" Dec 11 16:12:09 crc kubenswrapper[5120]: I1211 16:12:09.038345 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpfz9\" (UniqueName: \"kubernetes.io/projected/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-kube-api-access-vpfz9\") pod \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " Dec 11 16:12:09 crc kubenswrapper[5120]: I1211 16:12:09.038485 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " Dec 11 16:12:09 crc kubenswrapper[5120]: I1211 16:12:09.038507 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-registry-tls\") pod \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " Dec 11 16:12:09 crc kubenswrapper[5120]: I1211 16:12:09.038530 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-bound-sa-token\") pod \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " Dec 11 16:12:09 crc kubenswrapper[5120]: I1211 16:12:09.038635 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-ca-trust-extracted\") pod \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " Dec 11 16:12:09 crc kubenswrapper[5120]: I1211 16:12:09.038656 5120 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-registry-certificates\") pod \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " Dec 11 16:12:09 crc kubenswrapper[5120]: I1211 16:12:09.038678 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-installation-pull-secrets\") pod \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " Dec 11 16:12:09 crc kubenswrapper[5120]: I1211 16:12:09.038727 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-trusted-ca\") pod \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\" (UID: \"e502d0f9-d2f5-433f-ad5c-5353c996ba0e\") " Dec 11 16:12:09 crc kubenswrapper[5120]: I1211 16:12:09.040964 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "e502d0f9-d2f5-433f-ad5c-5353c996ba0e" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:12:09 crc kubenswrapper[5120]: I1211 16:12:09.042417 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "e502d0f9-d2f5-433f-ad5c-5353c996ba0e" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:12:09 crc kubenswrapper[5120]: I1211 16:12:09.045903 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "e502d0f9-d2f5-433f-ad5c-5353c996ba0e" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:12:09 crc kubenswrapper[5120]: I1211 16:12:09.046074 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "e502d0f9-d2f5-433f-ad5c-5353c996ba0e" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:12:09 crc kubenswrapper[5120]: I1211 16:12:09.048884 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-kube-api-access-vpfz9" (OuterVolumeSpecName: "kube-api-access-vpfz9") pod "e502d0f9-d2f5-433f-ad5c-5353c996ba0e" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e"). InnerVolumeSpecName "kube-api-access-vpfz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:12:09 crc kubenswrapper[5120]: I1211 16:12:09.053390 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "e502d0f9-d2f5-433f-ad5c-5353c996ba0e" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". 
PluginName "kubernetes.io/csi", VolumeGIDValue ""
Dec 11 16:12:09 crc kubenswrapper[5120]: I1211 16:12:09.075087 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "e502d0f9-d2f5-433f-ad5c-5353c996ba0e" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:12:09 crc kubenswrapper[5120]: I1211 16:12:09.077358 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "e502d0f9-d2f5-433f-ad5c-5353c996ba0e" (UID: "e502d0f9-d2f5-433f-ad5c-5353c996ba0e"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:12:09 crc kubenswrapper[5120]: I1211 16:12:09.140093 5120 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Dec 11 16:12:09 crc kubenswrapper[5120]: I1211 16:12:09.140140 5120 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-registry-certificates\") on node \"crc\" DevicePath \"\""
Dec 11 16:12:09 crc kubenswrapper[5120]: I1211 16:12:09.140215 5120 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Dec 11 16:12:09 crc kubenswrapper[5120]: I1211 16:12:09.140230 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 11 16:12:09 crc kubenswrapper[5120]: I1211 16:12:09.140243 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vpfz9\" (UniqueName: \"kubernetes.io/projected/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-kube-api-access-vpfz9\") on node \"crc\" DevicePath \"\""
Dec 11 16:12:09 crc kubenswrapper[5120]: I1211 16:12:09.140254 5120 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-registry-tls\") on node \"crc\" DevicePath \"\""
Dec 11 16:12:09 crc kubenswrapper[5120]: I1211 16:12:09.140266 5120 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e502d0f9-d2f5-433f-ad5c-5353c996ba0e-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 11 16:12:09 crc kubenswrapper[5120]: I1211 16:12:09.836909 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-7jcf9" event={"ID":"c8e9f618-5763-4411-a628-75edfea1c008","Type":"ContainerStarted","Data":"d7db4abf30bc62d3b54dce9f6e84bca66736075ad29a0d512ffe7cfe40c0f6fb"}
Dec 11 16:12:09 crc kubenswrapper[5120]: I1211 16:12:09.857245 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-s2npb"
Dec 11 16:12:09 crc kubenswrapper[5120]: I1211 16:12:09.857250 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-s2npb" event={"ID":"e502d0f9-d2f5-433f-ad5c-5353c996ba0e","Type":"ContainerDied","Data":"4fbe34ec792244e18e9f2e7be4734570c3a0ce36cc252aa14c88b45ea00f3808"}
Dec 11 16:12:09 crc kubenswrapper[5120]: I1211 16:12:09.857855 5120 scope.go:117] "RemoveContainer" containerID="61abd2db50eb2726a6fd59f5066c2c069871cac00ee6884254875da3b0f9a032"
Dec 11 16:12:09 crc kubenswrapper[5120]: I1211 16:12:09.863851 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-7jcf9" podStartSLOduration=2.41778 podStartE2EDuration="10.863836357s" podCreationTimestamp="2025-12-11 16:11:59 +0000 UTC" firstStartedPulling="2025-12-11 16:12:00.331690104 +0000 UTC m=+669.585993435" lastFinishedPulling="2025-12-11 16:12:08.777746421 +0000 UTC m=+678.032049792" observedRunningTime="2025-12-11 16:12:09.858623087 +0000 UTC m=+679.112926418" watchObservedRunningTime="2025-12-11 16:12:09.863836357 +0000 UTC m=+679.118139678"
Dec 11 16:12:09 crc kubenswrapper[5120]: I1211 16:12:09.893094 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-s2npb"]
Dec 11 16:12:09 crc kubenswrapper[5120]: I1211 16:12:09.903887 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-s2npb"]
Dec 11 16:12:11 crc kubenswrapper[5120]: I1211 16:12:11.035421 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e502d0f9-d2f5-433f-ad5c-5353c996ba0e" path="/var/lib/kubelet/pods/e502d0f9-d2f5-433f-ad5c-5353c996ba0e/volumes"
Dec 11 16:12:12 crc kubenswrapper[5120]: I1211 16:12:12.783595 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-68bdb49cbf-x5tz9"
Dec 11 16:12:16 crc kubenswrapper[5120]: I1211 16:12:16.021857 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-5jxgx"]
Dec 11 16:12:16 crc kubenswrapper[5120]: I1211 16:12:16.023233 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e502d0f9-d2f5-433f-ad5c-5353c996ba0e" containerName="registry"
Dec 11 16:12:16 crc kubenswrapper[5120]: I1211 16:12:16.023254 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="e502d0f9-d2f5-433f-ad5c-5353c996ba0e" containerName="registry"
Dec 11 16:12:16 crc kubenswrapper[5120]: I1211 16:12:16.023349 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="e502d0f9-d2f5-433f-ad5c-5353c996ba0e" containerName="registry"
Dec 11 16:12:16 crc kubenswrapper[5120]: I1211 16:12:16.030409 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-5jxgx"
Dec 11 16:12:16 crc kubenswrapper[5120]: I1211 16:12:16.030700 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-5jxgx"]
Dec 11 16:12:16 crc kubenswrapper[5120]: I1211 16:12:16.031848 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\""
Dec 11 16:12:16 crc kubenswrapper[5120]: I1211 16:12:16.031975 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-4j6q8\""
Dec 11 16:12:16 crc kubenswrapper[5120]: I1211 16:12:16.032203 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\""
Dec 11 16:12:16 crc kubenswrapper[5120]: I1211 16:12:16.140636 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hwmh\" (UniqueName: \"kubernetes.io/projected/951f8712-eecc-423e-a960-38c513139a87-kube-api-access-6hwmh\") pod \"cert-manager-cainjector-7dbf76d5c8-5jxgx\" (UID: \"951f8712-eecc-423e-a960-38c513139a87\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-5jxgx"
Dec 11 16:12:16 crc kubenswrapper[5120]: I1211 16:12:16.140695 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/951f8712-eecc-423e-a960-38c513139a87-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-5jxgx\" (UID: \"951f8712-eecc-423e-a960-38c513139a87\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-5jxgx"
Dec 11 16:12:16 crc kubenswrapper[5120]: I1211 16:12:16.244560 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6hwmh\" (UniqueName: \"kubernetes.io/projected/951f8712-eecc-423e-a960-38c513139a87-kube-api-access-6hwmh\") pod \"cert-manager-cainjector-7dbf76d5c8-5jxgx\" (UID: \"951f8712-eecc-423e-a960-38c513139a87\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-5jxgx"
Dec 11 16:12:16 crc kubenswrapper[5120]: I1211 16:12:16.244669 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/951f8712-eecc-423e-a960-38c513139a87-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-5jxgx\" (UID: \"951f8712-eecc-423e-a960-38c513139a87\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-5jxgx"
Dec 11 16:12:16 crc kubenswrapper[5120]: I1211 16:12:16.263591 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hwmh\" (UniqueName: \"kubernetes.io/projected/951f8712-eecc-423e-a960-38c513139a87-kube-api-access-6hwmh\") pod \"cert-manager-cainjector-7dbf76d5c8-5jxgx\" (UID: \"951f8712-eecc-423e-a960-38c513139a87\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-5jxgx"
Dec 11 16:12:16 crc kubenswrapper[5120]: I1211 16:12:16.274540 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/951f8712-eecc-423e-a960-38c513139a87-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-5jxgx\" (UID: \"951f8712-eecc-423e-a960-38c513139a87\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-5jxgx"
Dec 11 16:12:16 crc kubenswrapper[5120]: I1211 16:12:16.397832 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-5jxgx"
Dec 11 16:12:18 crc kubenswrapper[5120]: I1211 16:12:18.866226 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-j6qm5"]
Dec 11 16:12:18 crc kubenswrapper[5120]: I1211 16:12:18.890770 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-j6qm5"]
Dec 11 16:12:18 crc kubenswrapper[5120]: I1211 16:12:18.890910 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-j6qm5"
Dec 11 16:12:18 crc kubenswrapper[5120]: I1211 16:12:18.892882 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-gbftg\""
Dec 11 16:12:18 crc kubenswrapper[5120]: I1211 16:12:18.983562 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8mjm\" (UniqueName: \"kubernetes.io/projected/25148860-4ad2-4043-a808-472f7ce0275d-kube-api-access-s8mjm\") pod \"cert-manager-webhook-7894b5b9b4-j6qm5\" (UID: \"25148860-4ad2-4043-a808-472f7ce0275d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-j6qm5"
Dec 11 16:12:18 crc kubenswrapper[5120]: I1211 16:12:18.983639 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/25148860-4ad2-4043-a808-472f7ce0275d-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-j6qm5\" (UID: \"25148860-4ad2-4043-a808-472f7ce0275d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-j6qm5"
Dec 11 16:12:19 crc kubenswrapper[5120]: I1211 16:12:19.085284 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s8mjm\" (UniqueName: \"kubernetes.io/projected/25148860-4ad2-4043-a808-472f7ce0275d-kube-api-access-s8mjm\") pod \"cert-manager-webhook-7894b5b9b4-j6qm5\" (UID: \"25148860-4ad2-4043-a808-472f7ce0275d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-j6qm5"
Dec 11 16:12:19 crc kubenswrapper[5120]: I1211 16:12:19.085341 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/25148860-4ad2-4043-a808-472f7ce0275d-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-j6qm5\" (UID: \"25148860-4ad2-4043-a808-472f7ce0275d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-j6qm5"
Dec 11 16:12:19 crc kubenswrapper[5120]: I1211 16:12:19.144797 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8mjm\" (UniqueName: \"kubernetes.io/projected/25148860-4ad2-4043-a808-472f7ce0275d-kube-api-access-s8mjm\") pod \"cert-manager-webhook-7894b5b9b4-j6qm5\" (UID: \"25148860-4ad2-4043-a808-472f7ce0275d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-j6qm5"
Dec 11 16:12:19 crc kubenswrapper[5120]: I1211 16:12:19.148761 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/25148860-4ad2-4043-a808-472f7ce0275d-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-j6qm5\" (UID: \"25148860-4ad2-4043-a808-472f7ce0275d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-j6qm5"
Dec 11 16:12:19 crc kubenswrapper[5120]: I1211 16:12:19.219537 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-j6qm5"
Dec 11 16:12:19 crc kubenswrapper[5120]: I1211 16:12:19.827676 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-5jxgx"]
Dec 11 16:12:19 crc kubenswrapper[5120]: W1211 16:12:19.834286 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod951f8712_eecc_423e_a960_38c513139a87.slice/crio-bcda24f2bf99f134f06b41b9cd8e55e5fc8c34ad80516cb555c552228fcbc6ca WatchSource:0}: Error finding container bcda24f2bf99f134f06b41b9cd8e55e5fc8c34ad80516cb555c552228fcbc6ca: Status 404 returned error can't find the container with id bcda24f2bf99f134f06b41b9cd8e55e5fc8c34ad80516cb555c552228fcbc6ca
Dec 11 16:12:19 crc kubenswrapper[5120]: I1211 16:12:19.888627 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-j6qm5"]
Dec 11 16:12:19 crc kubenswrapper[5120]: W1211 16:12:19.900530 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25148860_4ad2_4043_a808_472f7ce0275d.slice/crio-e10762aa16a20fa8fd80f0b50acb66ada0c3a4f5942213e15cc5c895bc4f4ae1 WatchSource:0}: Error finding container e10762aa16a20fa8fd80f0b50acb66ada0c3a4f5942213e15cc5c895bc4f4ae1: Status 404 returned error can't find the container with id e10762aa16a20fa8fd80f0b50acb66ada0c3a4f5942213e15cc5c895bc4f4ae1
Dec 11 16:12:19 crc kubenswrapper[5120]: I1211 16:12:19.928139 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-j6qm5" event={"ID":"25148860-4ad2-4043-a808-472f7ce0275d","Type":"ContainerStarted","Data":"e10762aa16a20fa8fd80f0b50acb66ada0c3a4f5942213e15cc5c895bc4f4ae1"}
Dec 11 16:12:19 crc kubenswrapper[5120]: I1211 16:12:19.930033 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-5jxgx" event={"ID":"951f8712-eecc-423e-a960-38c513139a87","Type":"ContainerStarted","Data":"bcda24f2bf99f134f06b41b9cd8e55e5fc8c34ad80516cb555c552228fcbc6ca"}
Dec 11 16:12:19 crc kubenswrapper[5120]: I1211 16:12:19.932097 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"69934653-bd26-4a43-b097-8692e246cdfa","Type":"ContainerStarted","Data":"b9576b12e8701aae02e6e43ab6b6de6d335b5d9f7db1f9c09631eb92ff0cad3d"}
Dec 11 16:12:20 crc kubenswrapper[5120]: I1211 16:12:20.122445 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 11 16:12:20 crc kubenswrapper[5120]: I1211 16:12:20.162183 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 11 16:12:21 crc kubenswrapper[5120]: I1211 16:12:21.945925 5120 generic.go:358] "Generic (PLEG): container finished" podID="69934653-bd26-4a43-b097-8692e246cdfa" containerID="b9576b12e8701aae02e6e43ab6b6de6d335b5d9f7db1f9c09631eb92ff0cad3d" exitCode=0
Dec 11 16:12:21 crc kubenswrapper[5120]: I1211 16:12:21.945984 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"69934653-bd26-4a43-b097-8692e246cdfa","Type":"ContainerDied","Data":"b9576b12e8701aae02e6e43ab6b6de6d335b5d9f7db1f9c09631eb92ff0cad3d"}
Dec 11 16:12:22 crc kubenswrapper[5120]: I1211 16:12:22.971035 5120 generic.go:358] "Generic (PLEG): container finished" podID="69934653-bd26-4a43-b097-8692e246cdfa" containerID="8b3e9708debc245ca6c67703e4c6817bb34e3c4d0139f91689c19094a27db04c" exitCode=0
Dec 11 16:12:22 crc kubenswrapper[5120]: I1211 16:12:22.971214 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"69934653-bd26-4a43-b097-8692e246cdfa","Type":"ContainerDied","Data":"8b3e9708debc245ca6c67703e4c6817bb34e3c4d0139f91689c19094a27db04c"}
Dec 11 16:12:26 crc kubenswrapper[5120]: I1211 16:12:26.995342 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-j6qm5" event={"ID":"25148860-4ad2-4043-a808-472f7ce0275d","Type":"ContainerStarted","Data":"279766d62779f76438c570a71a52b24928ca552210af25f8880727615d54022f"}
Dec 11 16:12:26 crc kubenswrapper[5120]: I1211 16:12:26.995991 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-j6qm5"
Dec 11 16:12:26 crc kubenswrapper[5120]: I1211 16:12:26.997925 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-5jxgx" event={"ID":"951f8712-eecc-423e-a960-38c513139a87","Type":"ContainerStarted","Data":"201c16990323e33b0f989f27c7171f83498cbc1cf4e9bf963ae13048395dfc0c"}
Dec 11 16:12:27 crc kubenswrapper[5120]: I1211 16:12:27.001494 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"69934653-bd26-4a43-b097-8692e246cdfa","Type":"ContainerStarted","Data":"c76a84b9fcd37e65733efa7590ddeab6b6f2acbcdbbaf67b81942b4cc3edd8ec"}
Dec 11 16:12:27 crc kubenswrapper[5120]: I1211 16:12:27.001786 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 16:12:27 crc kubenswrapper[5120]: I1211 16:12:27.015309 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-j6qm5" podStartSLOduration=2.115069429 podStartE2EDuration="9.015285554s" podCreationTimestamp="2025-12-11 16:12:18 +0000 UTC" firstStartedPulling="2025-12-11 16:12:19.905856894 +0000 UTC m=+689.160160255" lastFinishedPulling="2025-12-11 16:12:26.806073049 +0000 UTC m=+696.060376380" observedRunningTime="2025-12-11 16:12:27.012556841 +0000 UTC m=+696.266860182" watchObservedRunningTime="2025-12-11 16:12:27.015285554 +0000 UTC m=+696.269588885"
Dec 11 16:12:27 crc kubenswrapper[5120]: I1211 16:12:27.045552 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=8.900681959 podStartE2EDuration="21.045536403s" podCreationTimestamp="2025-12-11 16:12:06 +0000 UTC" firstStartedPulling="2025-12-11 16:12:07.459883597 +0000 UTC m=+676.714186928" lastFinishedPulling="2025-12-11 16:12:19.604738041 +0000 UTC m=+688.859041372" observedRunningTime="2025-12-11 16:12:27.043065807 +0000 UTC m=+696.297369138" watchObservedRunningTime="2025-12-11 16:12:27.045536403 +0000 UTC m=+696.299839734"
Dec 11 16:12:27 crc kubenswrapper[5120]: I1211 16:12:27.069838 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-5jxgx" podStartSLOduration=4.097050708 podStartE2EDuration="11.069813403s" podCreationTimestamp="2025-12-11 16:12:16 +0000 UTC" firstStartedPulling="2025-12-11 16:12:19.847974966 +0000 UTC m=+689.102278327" lastFinishedPulling="2025-12-11 16:12:26.820737691 +0000 UTC m=+696.075041022" observedRunningTime="2025-12-11 16:12:27.062916338 +0000 UTC m=+696.317219679" watchObservedRunningTime="2025-12-11 16:12:27.069813403 +0000 UTC m=+696.324116744"
Dec 11 16:12:32 crc kubenswrapper[5120]: I1211 16:12:32.399389 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-bvl8j"]
Dec 11 16:12:32 crc kubenswrapper[5120]: I1211 16:12:32.881813 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-bvl8j"]
Dec 11 16:12:32 crc kubenswrapper[5120]: I1211 16:12:32.881898 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-bvl8j"
Dec 11 16:12:32 crc kubenswrapper[5120]: I1211 16:12:32.884385 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-dqjss\""
Dec 11 16:12:33 crc kubenswrapper[5120]: I1211 16:12:33.010510 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-j6qm5"
Dec 11 16:12:33 crc kubenswrapper[5120]: I1211 16:12:33.076541 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/637d7c8e-c7ec-4bb2-81ef-1baf22d4265e-bound-sa-token\") pod \"cert-manager-858d87f86b-bvl8j\" (UID: \"637d7c8e-c7ec-4bb2-81ef-1baf22d4265e\") " pod="cert-manager/cert-manager-858d87f86b-bvl8j"
Dec 11 16:12:33 crc kubenswrapper[5120]: I1211 16:12:33.076639 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dfrx\" (UniqueName: \"kubernetes.io/projected/637d7c8e-c7ec-4bb2-81ef-1baf22d4265e-kube-api-access-5dfrx\") pod \"cert-manager-858d87f86b-bvl8j\" (UID: \"637d7c8e-c7ec-4bb2-81ef-1baf22d4265e\") " pod="cert-manager/cert-manager-858d87f86b-bvl8j"
Dec 11 16:12:33 crc kubenswrapper[5120]: I1211 16:12:33.178811 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/637d7c8e-c7ec-4bb2-81ef-1baf22d4265e-bound-sa-token\") pod \"cert-manager-858d87f86b-bvl8j\" (UID: \"637d7c8e-c7ec-4bb2-81ef-1baf22d4265e\") " pod="cert-manager/cert-manager-858d87f86b-bvl8j"
Dec 11 16:12:33 crc kubenswrapper[5120]: I1211 16:12:33.179138 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5dfrx\" (UniqueName: \"kubernetes.io/projected/637d7c8e-c7ec-4bb2-81ef-1baf22d4265e-kube-api-access-5dfrx\") pod \"cert-manager-858d87f86b-bvl8j\" (UID: \"637d7c8e-c7ec-4bb2-81ef-1baf22d4265e\") " pod="cert-manager/cert-manager-858d87f86b-bvl8j"
Dec 11 16:12:33 crc kubenswrapper[5120]: I1211 16:12:33.200614 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/637d7c8e-c7ec-4bb2-81ef-1baf22d4265e-bound-sa-token\") pod \"cert-manager-858d87f86b-bvl8j\" (UID: \"637d7c8e-c7ec-4bb2-81ef-1baf22d4265e\") " pod="cert-manager/cert-manager-858d87f86b-bvl8j"
Dec 11 16:12:33 crc kubenswrapper[5120]: I1211 16:12:33.201548 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dfrx\" (UniqueName: \"kubernetes.io/projected/637d7c8e-c7ec-4bb2-81ef-1baf22d4265e-kube-api-access-5dfrx\") pod \"cert-manager-858d87f86b-bvl8j\" (UID: \"637d7c8e-c7ec-4bb2-81ef-1baf22d4265e\") " pod="cert-manager/cert-manager-858d87f86b-bvl8j"
Dec 11 16:12:33 crc kubenswrapper[5120]: I1211 16:12:33.205369 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-bvl8j"
Dec 11 16:12:33 crc kubenswrapper[5120]: I1211 16:12:33.663436 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-bvl8j"]
Dec 11 16:12:33 crc kubenswrapper[5120]: W1211 16:12:33.670335 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod637d7c8e_c7ec_4bb2_81ef_1baf22d4265e.slice/crio-efbf2994428d841cf2fc34927e68e691200b3e0dd80f4325c3e729fa455bb270 WatchSource:0}: Error finding container efbf2994428d841cf2fc34927e68e691200b3e0dd80f4325c3e729fa455bb270: Status 404 returned error can't find the container with id efbf2994428d841cf2fc34927e68e691200b3e0dd80f4325c3e729fa455bb270
Dec 11 16:12:34 crc kubenswrapper[5120]: I1211 16:12:34.055260 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-bvl8j" event={"ID":"637d7c8e-c7ec-4bb2-81ef-1baf22d4265e","Type":"ContainerStarted","Data":"efbf2994428d841cf2fc34927e68e691200b3e0dd80f4325c3e729fa455bb270"}
Dec 11 16:12:38 crc kubenswrapper[5120]: I1211 16:12:38.121042 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="69934653-bd26-4a43-b097-8692e246cdfa" containerName="elasticsearch" probeResult="failure" output=<
Dec 11 16:12:38 crc kubenswrapper[5120]: {"timestamp": "2025-12-11T16:12:38+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Dec 11 16:12:38 crc kubenswrapper[5120]: >
Dec 11 16:12:41 crc kubenswrapper[5120]: I1211 16:12:41.113519 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-bvl8j" event={"ID":"637d7c8e-c7ec-4bb2-81ef-1baf22d4265e","Type":"ContainerStarted","Data":"1181d5a4c9805029bc775025d922ec573673a348ecb06fe807fa5f46dcc065f5"}
Dec 11 16:12:41 crc kubenswrapper[5120]: I1211 16:12:41.130674 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-bvl8j" podStartSLOduration=9.130647677 podStartE2EDuration="9.130647677s" podCreationTimestamp="2025-12-11 16:12:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 16:12:41.126936697 +0000 UTC m=+710.381240028" watchObservedRunningTime="2025-12-11 16:12:41.130647677 +0000 UTC m=+710.384951008"
Dec 11 16:12:43 crc kubenswrapper[5120]: I1211 16:12:43.784526 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0"
Dec 11 16:12:56 crc kubenswrapper[5120]: I1211 16:12:56.805219 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"]
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.068446 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.070998 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-global-ca\""
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.071368 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-ca\""
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.071518 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-sys-config\""
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.072448 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-mfv8j\""
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.072515 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"]
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.076085 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\""
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.217310 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f899f634-d53a-4a67-b2b6-160787e8525c-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.217399 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmsnr\" (UniqueName: \"kubernetes.io/projected/f899f634-d53a-4a67-b2b6-160787e8525c-kube-api-access-lmsnr\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.217475 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f899f634-d53a-4a67-b2b6-160787e8525c-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.217531 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f899f634-d53a-4a67-b2b6-160787e8525c-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.217620 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/f899f634-d53a-4a67-b2b6-160787e8525c-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.217677 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f899f634-d53a-4a67-b2b6-160787e8525c-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.217724 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f899f634-d53a-4a67-b2b6-160787e8525c-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.217805 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f899f634-d53a-4a67-b2b6-160787e8525c-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.217868 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-mfv8j-push\" (UniqueName: \"kubernetes.io/secret/f899f634-d53a-4a67-b2b6-160787e8525c-builder-dockercfg-mfv8j-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.217968 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f899f634-d53a-4a67-b2b6-160787e8525c-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.218036 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-mfv8j-pull\" (UniqueName: \"kubernetes.io/secret/f899f634-d53a-4a67-b2b6-160787e8525c-builder-dockercfg-mfv8j-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.218093 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f899f634-d53a-4a67-b2b6-160787e8525c-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.218137 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f899f634-d53a-4a67-b2b6-160787e8525c-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.319564 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f899f634-d53a-4a67-b2b6-160787e8525c-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.319636 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f899f634-d53a-4a67-b2b6-160787e8525c-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.319662 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-mfv8j-push\" (UniqueName: \"kubernetes.io/secret/f899f634-d53a-4a67-b2b6-160787e8525c-builder-dockercfg-mfv8j-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.319710 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f899f634-d53a-4a67-b2b6-160787e8525c-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.319753 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-mfv8j-pull\" (UniqueName: \"kubernetes.io/secret/f899f634-d53a-4a67-b2b6-160787e8525c-builder-dockercfg-mfv8j-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.319805 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f899f634-d53a-4a67-b2b6-160787e8525c-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.319837 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f899f634-d53a-4a67-b2b6-160787e8525c-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.319933 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f899f634-d53a-4a67-b2b6-160787e8525c-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.319980 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lmsnr\" (UniqueName: \"kubernetes.io/projected/f899f634-d53a-4a67-b2b6-160787e8525c-kube-api-access-lmsnr\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.320023 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f899f634-d53a-4a67-b2b6-160787e8525c-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.320042 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f899f634-d53a-4a67-b2b6-160787e8525c-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.320086 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f899f634-d53a-4a67-b2b6-160787e8525c-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.320169 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/f899f634-d53a-4a67-b2b6-160787e8525c-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") "
pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.320210 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f899f634-d53a-4a67-b2b6-160787e8525c-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.320311 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f899f634-d53a-4a67-b2b6-160787e8525c-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.320472 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f899f634-d53a-4a67-b2b6-160787e8525c-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.320638 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f899f634-d53a-4a67-b2b6-160787e8525c-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.320831 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f899f634-d53a-4a67-b2b6-160787e8525c-buildworkdir\") pod 
\"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.321142 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f899f634-d53a-4a67-b2b6-160787e8525c-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.321285 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f899f634-d53a-4a67-b2b6-160787e8525c-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.321519 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f899f634-d53a-4a67-b2b6-160787e8525c-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.321642 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f899f634-d53a-4a67-b2b6-160787e8525c-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.327712 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/f899f634-d53a-4a67-b2b6-160787e8525c-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.327845 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-mfv8j-pull\" (UniqueName: \"kubernetes.io/secret/f899f634-d53a-4a67-b2b6-160787e8525c-builder-dockercfg-mfv8j-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.328183 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-mfv8j-push\" (UniqueName: \"kubernetes.io/secret/f899f634-d53a-4a67-b2b6-160787e8525c-builder-dockercfg-mfv8j-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.342727 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmsnr\" (UniqueName: \"kubernetes.io/projected/f899f634-d53a-4a67-b2b6-160787e8525c-kube-api-access-lmsnr\") pod \"service-telemetry-framework-index-1-build\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.396013 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 16:12:57 crc kubenswrapper[5120]: I1211 16:12:57.869161 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 11 16:12:58 crc kubenswrapper[5120]: I1211 16:12:58.230257 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"f899f634-d53a-4a67-b2b6-160787e8525c","Type":"ContainerStarted","Data":"206b4672323dd1ac388a40d55793b43f394fc414fca3dc958d1ca2b9e7eafb15"} Dec 11 16:13:03 crc kubenswrapper[5120]: I1211 16:13:03.267389 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"f899f634-d53a-4a67-b2b6-160787e8525c","Type":"ContainerStarted","Data":"0045e06634516bae82e178ff73217759ebf16d2f104d5847d1e7d403096441be"} Dec 11 16:13:03 crc kubenswrapper[5120]: I1211 16:13:03.328723 5120 ???:1] "http: TLS handshake error from 192.168.126.11:39970: no serving certificate available for the kubelet" Dec 11 16:13:04 crc kubenswrapper[5120]: I1211 16:13:04.360830 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.290479 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-1-build" podUID="f899f634-d53a-4a67-b2b6-160787e8525c" containerName="git-clone" containerID="cri-o://0045e06634516bae82e178ff73217759ebf16d2f104d5847d1e7d403096441be" gracePeriod=30 Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.731983 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_f899f634-d53a-4a67-b2b6-160787e8525c/git-clone/0.log" Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.732061 5120 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.848896 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f899f634-d53a-4a67-b2b6-160787e8525c-buildcachedir\") pod \"f899f634-d53a-4a67-b2b6-160787e8525c\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.849018 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f899f634-d53a-4a67-b2b6-160787e8525c-container-storage-root\") pod \"f899f634-d53a-4a67-b2b6-160787e8525c\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.849129 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f899f634-d53a-4a67-b2b6-160787e8525c-container-storage-run\") pod \"f899f634-d53a-4a67-b2b6-160787e8525c\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.849224 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f899f634-d53a-4a67-b2b6-160787e8525c-node-pullsecrets\") pod \"f899f634-d53a-4a67-b2b6-160787e8525c\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.849294 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f899f634-d53a-4a67-b2b6-160787e8525c-build-ca-bundles\") pod \"f899f634-d53a-4a67-b2b6-160787e8525c\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 
16:13:05.849336 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f899f634-d53a-4a67-b2b6-160787e8525c-build-system-configs\") pod \"f899f634-d53a-4a67-b2b6-160787e8525c\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.849423 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f899f634-d53a-4a67-b2b6-160787e8525c-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "f899f634-d53a-4a67-b2b6-160787e8525c" (UID: "f899f634-d53a-4a67-b2b6-160787e8525c"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.849443 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-mfv8j-push\" (UniqueName: \"kubernetes.io/secret/f899f634-d53a-4a67-b2b6-160787e8525c-builder-dockercfg-mfv8j-push\") pod \"f899f634-d53a-4a67-b2b6-160787e8525c\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.849489 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f899f634-d53a-4a67-b2b6-160787e8525c-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "f899f634-d53a-4a67-b2b6-160787e8525c" (UID: "f899f634-d53a-4a67-b2b6-160787e8525c"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.849569 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f899f634-d53a-4a67-b2b6-160787e8525c-build-blob-cache\") pod \"f899f634-d53a-4a67-b2b6-160787e8525c\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.849592 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f899f634-d53a-4a67-b2b6-160787e8525c-buildworkdir\") pod \"f899f634-d53a-4a67-b2b6-160787e8525c\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.849615 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/f899f634-d53a-4a67-b2b6-160787e8525c-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"f899f634-d53a-4a67-b2b6-160787e8525c\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.849660 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f899f634-d53a-4a67-b2b6-160787e8525c-build-proxy-ca-bundles\") pod \"f899f634-d53a-4a67-b2b6-160787e8525c\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.849739 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmsnr\" (UniqueName: \"kubernetes.io/projected/f899f634-d53a-4a67-b2b6-160787e8525c-kube-api-access-lmsnr\") pod \"f899f634-d53a-4a67-b2b6-160787e8525c\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 
16:13:05.849764 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-mfv8j-pull\" (UniqueName: \"kubernetes.io/secret/f899f634-d53a-4a67-b2b6-160787e8525c-builder-dockercfg-mfv8j-pull\") pod \"f899f634-d53a-4a67-b2b6-160787e8525c\" (UID: \"f899f634-d53a-4a67-b2b6-160787e8525c\") " Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.849828 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f899f634-d53a-4a67-b2b6-160787e8525c-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "f899f634-d53a-4a67-b2b6-160787e8525c" (UID: "f899f634-d53a-4a67-b2b6-160787e8525c"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.850014 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f899f634-d53a-4a67-b2b6-160787e8525c-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "f899f634-d53a-4a67-b2b6-160787e8525c" (UID: "f899f634-d53a-4a67-b2b6-160787e8525c"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.850772 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f899f634-d53a-4a67-b2b6-160787e8525c-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "f899f634-d53a-4a67-b2b6-160787e8525c" (UID: "f899f634-d53a-4a67-b2b6-160787e8525c"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.850840 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f899f634-d53a-4a67-b2b6-160787e8525c-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.850852 5120 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f899f634-d53a-4a67-b2b6-160787e8525c-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.850862 5120 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f899f634-d53a-4a67-b2b6-160787e8525c-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.850871 5120 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f899f634-d53a-4a67-b2b6-160787e8525c-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.850934 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f899f634-d53a-4a67-b2b6-160787e8525c-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.851258 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f899f634-d53a-4a67-b2b6-160787e8525c-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "f899f634-d53a-4a67-b2b6-160787e8525c" (UID: "f899f634-d53a-4a67-b2b6-160787e8525c"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.851706 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f899f634-d53a-4a67-b2b6-160787e8525c-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "f899f634-d53a-4a67-b2b6-160787e8525c" (UID: "f899f634-d53a-4a67-b2b6-160787e8525c"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.851753 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f899f634-d53a-4a67-b2b6-160787e8525c-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "f899f634-d53a-4a67-b2b6-160787e8525c" (UID: "f899f634-d53a-4a67-b2b6-160787e8525c"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.852001 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f899f634-d53a-4a67-b2b6-160787e8525c-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "f899f634-d53a-4a67-b2b6-160787e8525c" (UID: "f899f634-d53a-4a67-b2b6-160787e8525c"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.856648 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f899f634-d53a-4a67-b2b6-160787e8525c-builder-dockercfg-mfv8j-pull" (OuterVolumeSpecName: "builder-dockercfg-mfv8j-pull") pod "f899f634-d53a-4a67-b2b6-160787e8525c" (UID: "f899f634-d53a-4a67-b2b6-160787e8525c"). InnerVolumeSpecName "builder-dockercfg-mfv8j-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.856957 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f899f634-d53a-4a67-b2b6-160787e8525c-builder-dockercfg-mfv8j-push" (OuterVolumeSpecName: "builder-dockercfg-mfv8j-push") pod "f899f634-d53a-4a67-b2b6-160787e8525c" (UID: "f899f634-d53a-4a67-b2b6-160787e8525c"). InnerVolumeSpecName "builder-dockercfg-mfv8j-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.857239 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f899f634-d53a-4a67-b2b6-160787e8525c-kube-api-access-lmsnr" (OuterVolumeSpecName: "kube-api-access-lmsnr") pod "f899f634-d53a-4a67-b2b6-160787e8525c" (UID: "f899f634-d53a-4a67-b2b6-160787e8525c"). InnerVolumeSpecName "kube-api-access-lmsnr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.857804 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f899f634-d53a-4a67-b2b6-160787e8525c-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "f899f634-d53a-4a67-b2b6-160787e8525c" (UID: "f899f634-d53a-4a67-b2b6-160787e8525c"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.952675 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lmsnr\" (UniqueName: \"kubernetes.io/projected/f899f634-d53a-4a67-b2b6-160787e8525c-kube-api-access-lmsnr\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.952719 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-mfv8j-pull\" (UniqueName: \"kubernetes.io/secret/f899f634-d53a-4a67-b2b6-160787e8525c-builder-dockercfg-mfv8j-pull\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.952732 5120 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f899f634-d53a-4a67-b2b6-160787e8525c-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.952745 5120 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f899f634-d53a-4a67-b2b6-160787e8525c-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.952755 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-mfv8j-push\" (UniqueName: \"kubernetes.io/secret/f899f634-d53a-4a67-b2b6-160787e8525c-builder-dockercfg-mfv8j-push\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.952766 5120 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f899f634-d53a-4a67-b2b6-160787e8525c-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.952779 5120 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: 
\"kubernetes.io/secret/f899f634-d53a-4a67-b2b6-160787e8525c-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:05 crc kubenswrapper[5120]: I1211 16:13:05.952791 5120 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f899f634-d53a-4a67-b2b6-160787e8525c-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:06 crc kubenswrapper[5120]: I1211 16:13:06.300641 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_f899f634-d53a-4a67-b2b6-160787e8525c/git-clone/0.log" Dec 11 16:13:06 crc kubenswrapper[5120]: I1211 16:13:06.300695 5120 generic.go:358] "Generic (PLEG): container finished" podID="f899f634-d53a-4a67-b2b6-160787e8525c" containerID="0045e06634516bae82e178ff73217759ebf16d2f104d5847d1e7d403096441be" exitCode=1 Dec 11 16:13:06 crc kubenswrapper[5120]: I1211 16:13:06.300790 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"f899f634-d53a-4a67-b2b6-160787e8525c","Type":"ContainerDied","Data":"0045e06634516bae82e178ff73217759ebf16d2f104d5847d1e7d403096441be"} Dec 11 16:13:06 crc kubenswrapper[5120]: I1211 16:13:06.300827 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"f899f634-d53a-4a67-b2b6-160787e8525c","Type":"ContainerDied","Data":"206b4672323dd1ac388a40d55793b43f394fc414fca3dc958d1ca2b9e7eafb15"} Dec 11 16:13:06 crc kubenswrapper[5120]: I1211 16:13:06.300836 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 16:13:06 crc kubenswrapper[5120]: I1211 16:13:06.300849 5120 scope.go:117] "RemoveContainer" containerID="0045e06634516bae82e178ff73217759ebf16d2f104d5847d1e7d403096441be" Dec 11 16:13:06 crc kubenswrapper[5120]: I1211 16:13:06.336071 5120 scope.go:117] "RemoveContainer" containerID="0045e06634516bae82e178ff73217759ebf16d2f104d5847d1e7d403096441be" Dec 11 16:13:06 crc kubenswrapper[5120]: E1211 16:13:06.337103 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0045e06634516bae82e178ff73217759ebf16d2f104d5847d1e7d403096441be\": container with ID starting with 0045e06634516bae82e178ff73217759ebf16d2f104d5847d1e7d403096441be not found: ID does not exist" containerID="0045e06634516bae82e178ff73217759ebf16d2f104d5847d1e7d403096441be" Dec 11 16:13:06 crc kubenswrapper[5120]: I1211 16:13:06.337342 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0045e06634516bae82e178ff73217759ebf16d2f104d5847d1e7d403096441be"} err="failed to get container status \"0045e06634516bae82e178ff73217759ebf16d2f104d5847d1e7d403096441be\": rpc error: code = NotFound desc = could not find container \"0045e06634516bae82e178ff73217759ebf16d2f104d5847d1e7d403096441be\": container with ID starting with 0045e06634516bae82e178ff73217759ebf16d2f104d5847d1e7d403096441be not found: ID does not exist" Dec 11 16:13:06 crc kubenswrapper[5120]: I1211 16:13:06.355247 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 11 16:13:06 crc kubenswrapper[5120]: I1211 16:13:06.360489 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 11 16:13:07 crc kubenswrapper[5120]: I1211 16:13:07.032596 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="f899f634-d53a-4a67-b2b6-160787e8525c" path="/var/lib/kubelet/pods/f899f634-d53a-4a67-b2b6-160787e8525c/volumes" Dec 11 16:13:13 crc kubenswrapper[5120]: I1211 16:13:13.205444 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xtwqr"] Dec 11 16:13:13 crc kubenswrapper[5120]: I1211 16:13:13.207115 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f899f634-d53a-4a67-b2b6-160787e8525c" containerName="git-clone" Dec 11 16:13:13 crc kubenswrapper[5120]: I1211 16:13:13.207140 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="f899f634-d53a-4a67-b2b6-160787e8525c" containerName="git-clone" Dec 11 16:13:13 crc kubenswrapper[5120]: I1211 16:13:13.207330 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="f899f634-d53a-4a67-b2b6-160787e8525c" containerName="git-clone" Dec 11 16:13:13 crc kubenswrapper[5120]: I1211 16:13:13.237950 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xtwqr"] Dec 11 16:13:13 crc kubenswrapper[5120]: I1211 16:13:13.238180 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xtwqr" Dec 11 16:13:13 crc kubenswrapper[5120]: I1211 16:13:13.356170 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/477943e4-56b7-40a5-9185-399fee3c53d4-utilities\") pod \"redhat-operators-xtwqr\" (UID: \"477943e4-56b7-40a5-9185-399fee3c53d4\") " pod="openshift-marketplace/redhat-operators-xtwqr" Dec 11 16:13:13 crc kubenswrapper[5120]: I1211 16:13:13.356345 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/477943e4-56b7-40a5-9185-399fee3c53d4-catalog-content\") pod \"redhat-operators-xtwqr\" (UID: \"477943e4-56b7-40a5-9185-399fee3c53d4\") " pod="openshift-marketplace/redhat-operators-xtwqr" Dec 11 16:13:13 crc kubenswrapper[5120]: I1211 16:13:13.356468 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfzpj\" (UniqueName: \"kubernetes.io/projected/477943e4-56b7-40a5-9185-399fee3c53d4-kube-api-access-mfzpj\") pod \"redhat-operators-xtwqr\" (UID: \"477943e4-56b7-40a5-9185-399fee3c53d4\") " pod="openshift-marketplace/redhat-operators-xtwqr" Dec 11 16:13:13 crc kubenswrapper[5120]: I1211 16:13:13.458126 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mfzpj\" (UniqueName: \"kubernetes.io/projected/477943e4-56b7-40a5-9185-399fee3c53d4-kube-api-access-mfzpj\") pod \"redhat-operators-xtwqr\" (UID: \"477943e4-56b7-40a5-9185-399fee3c53d4\") " pod="openshift-marketplace/redhat-operators-xtwqr" Dec 11 16:13:13 crc kubenswrapper[5120]: I1211 16:13:13.458228 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/477943e4-56b7-40a5-9185-399fee3c53d4-utilities\") pod \"redhat-operators-xtwqr\" (UID: 
\"477943e4-56b7-40a5-9185-399fee3c53d4\") " pod="openshift-marketplace/redhat-operators-xtwqr" Dec 11 16:13:13 crc kubenswrapper[5120]: I1211 16:13:13.458357 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/477943e4-56b7-40a5-9185-399fee3c53d4-catalog-content\") pod \"redhat-operators-xtwqr\" (UID: \"477943e4-56b7-40a5-9185-399fee3c53d4\") " pod="openshift-marketplace/redhat-operators-xtwqr" Dec 11 16:13:13 crc kubenswrapper[5120]: I1211 16:13:13.458999 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/477943e4-56b7-40a5-9185-399fee3c53d4-utilities\") pod \"redhat-operators-xtwqr\" (UID: \"477943e4-56b7-40a5-9185-399fee3c53d4\") " pod="openshift-marketplace/redhat-operators-xtwqr" Dec 11 16:13:13 crc kubenswrapper[5120]: I1211 16:13:13.459118 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/477943e4-56b7-40a5-9185-399fee3c53d4-catalog-content\") pod \"redhat-operators-xtwqr\" (UID: \"477943e4-56b7-40a5-9185-399fee3c53d4\") " pod="openshift-marketplace/redhat-operators-xtwqr" Dec 11 16:13:13 crc kubenswrapper[5120]: I1211 16:13:13.482118 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfzpj\" (UniqueName: \"kubernetes.io/projected/477943e4-56b7-40a5-9185-399fee3c53d4-kube-api-access-mfzpj\") pod \"redhat-operators-xtwqr\" (UID: \"477943e4-56b7-40a5-9185-399fee3c53d4\") " pod="openshift-marketplace/redhat-operators-xtwqr" Dec 11 16:13:13 crc kubenswrapper[5120]: I1211 16:13:13.561391 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xtwqr" Dec 11 16:13:13 crc kubenswrapper[5120]: I1211 16:13:13.773440 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xtwqr"] Dec 11 16:13:14 crc kubenswrapper[5120]: I1211 16:13:14.381134 5120 generic.go:358] "Generic (PLEG): container finished" podID="477943e4-56b7-40a5-9185-399fee3c53d4" containerID="38e6b6e5521e9dd153edfcbd31c943f40667a31929bf90daf6bcba86a549a42e" exitCode=0 Dec 11 16:13:14 crc kubenswrapper[5120]: I1211 16:13:14.381308 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtwqr" event={"ID":"477943e4-56b7-40a5-9185-399fee3c53d4","Type":"ContainerDied","Data":"38e6b6e5521e9dd153edfcbd31c943f40667a31929bf90daf6bcba86a549a42e"} Dec 11 16:13:14 crc kubenswrapper[5120]: I1211 16:13:14.381556 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtwqr" event={"ID":"477943e4-56b7-40a5-9185-399fee3c53d4","Type":"ContainerStarted","Data":"30347110f104ad6e44a5dcf008c8557b29497503da53e01ba38c66ac3a3a155f"} Dec 11 16:13:15 crc kubenswrapper[5120]: I1211 16:13:15.391873 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtwqr" event={"ID":"477943e4-56b7-40a5-9185-399fee3c53d4","Type":"ContainerStarted","Data":"ffaba31317981a6cec1ae908bd8c3311ec7f045f88488f9495d5dabcdad2c8a0"} Dec 11 16:13:15 crc kubenswrapper[5120]: I1211 16:13:15.860531 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Dec 11 16:13:15 crc kubenswrapper[5120]: I1211 16:13:15.867195 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:15 crc kubenswrapper[5120]: I1211 16:13:15.870189 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-2-global-ca\"" Dec 11 16:13:15 crc kubenswrapper[5120]: I1211 16:13:15.870439 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\"" Dec 11 16:13:15 crc kubenswrapper[5120]: I1211 16:13:15.872085 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-2-ca\"" Dec 11 16:13:15 crc kubenswrapper[5120]: I1211 16:13:15.872860 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-mfv8j\"" Dec 11 16:13:15 crc kubenswrapper[5120]: I1211 16:13:15.873531 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-2-sys-config\"" Dec 11 16:13:15 crc kubenswrapper[5120]: I1211 16:13:15.893476 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Dec 11 16:13:15 crc kubenswrapper[5120]: I1211 16:13:15.991744 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e78d009f-5e7e-4b01-af6b-da9313dcc57f-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:15 crc kubenswrapper[5120]: I1211 16:13:15.991798 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/e78d009f-5e7e-4b01-af6b-da9313dcc57f-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:15 crc kubenswrapper[5120]: I1211 16:13:15.991836 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-mfv8j-push\" (UniqueName: \"kubernetes.io/secret/e78d009f-5e7e-4b01-af6b-da9313dcc57f-builder-dockercfg-mfv8j-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:15 crc kubenswrapper[5120]: I1211 16:13:15.991893 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm9rt\" (UniqueName: \"kubernetes.io/projected/e78d009f-5e7e-4b01-af6b-da9313dcc57f-kube-api-access-wm9rt\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:15 crc kubenswrapper[5120]: I1211 16:13:15.991927 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e78d009f-5e7e-4b01-af6b-da9313dcc57f-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:15 crc kubenswrapper[5120]: I1211 16:13:15.992066 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e78d009f-5e7e-4b01-af6b-da9313dcc57f-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: 
\"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:15 crc kubenswrapper[5120]: I1211 16:13:15.992181 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e78d009f-5e7e-4b01-af6b-da9313dcc57f-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:15 crc kubenswrapper[5120]: I1211 16:13:15.992200 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e78d009f-5e7e-4b01-af6b-da9313dcc57f-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:15 crc kubenswrapper[5120]: I1211 16:13:15.992226 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e78d009f-5e7e-4b01-af6b-da9313dcc57f-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:15 crc kubenswrapper[5120]: I1211 16:13:15.992249 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-mfv8j-pull\" (UniqueName: \"kubernetes.io/secret/e78d009f-5e7e-4b01-af6b-da9313dcc57f-builder-dockercfg-mfv8j-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:15 crc kubenswrapper[5120]: I1211 16:13:15.992266 5120 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/e78d009f-5e7e-4b01-af6b-da9313dcc57f-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:15 crc kubenswrapper[5120]: I1211 16:13:15.992319 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e78d009f-5e7e-4b01-af6b-da9313dcc57f-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:15 crc kubenswrapper[5120]: I1211 16:13:15.992388 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e78d009f-5e7e-4b01-af6b-da9313dcc57f-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:16 crc kubenswrapper[5120]: I1211 16:13:16.093789 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e78d009f-5e7e-4b01-af6b-da9313dcc57f-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:16 crc kubenswrapper[5120]: I1211 16:13:16.093859 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/e78d009f-5e7e-4b01-af6b-da9313dcc57f-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:16 crc kubenswrapper[5120]: I1211 16:13:16.093941 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e78d009f-5e7e-4b01-af6b-da9313dcc57f-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:16 crc kubenswrapper[5120]: I1211 16:13:16.094022 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e78d009f-5e7e-4b01-af6b-da9313dcc57f-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:16 crc kubenswrapper[5120]: I1211 16:13:16.094050 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e78d009f-5e7e-4b01-af6b-da9313dcc57f-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:16 crc kubenswrapper[5120]: I1211 16:13:16.094096 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-mfv8j-push\" (UniqueName: \"kubernetes.io/secret/e78d009f-5e7e-4b01-af6b-da9313dcc57f-builder-dockercfg-mfv8j-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:16 
crc kubenswrapper[5120]: I1211 16:13:16.094140 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wm9rt\" (UniqueName: \"kubernetes.io/projected/e78d009f-5e7e-4b01-af6b-da9313dcc57f-kube-api-access-wm9rt\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:16 crc kubenswrapper[5120]: I1211 16:13:16.094393 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e78d009f-5e7e-4b01-af6b-da9313dcc57f-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:16 crc kubenswrapper[5120]: I1211 16:13:16.094414 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e78d009f-5e7e-4b01-af6b-da9313dcc57f-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:16 crc kubenswrapper[5120]: I1211 16:13:16.094459 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e78d009f-5e7e-4b01-af6b-da9313dcc57f-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:16 crc kubenswrapper[5120]: I1211 16:13:16.094520 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e78d009f-5e7e-4b01-af6b-da9313dcc57f-buildcachedir\") pod 
\"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:16 crc kubenswrapper[5120]: I1211 16:13:16.094531 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e78d009f-5e7e-4b01-af6b-da9313dcc57f-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:16 crc kubenswrapper[5120]: I1211 16:13:16.094557 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e78d009f-5e7e-4b01-af6b-da9313dcc57f-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:16 crc kubenswrapper[5120]: I1211 16:13:16.094469 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e78d009f-5e7e-4b01-af6b-da9313dcc57f-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:16 crc kubenswrapper[5120]: I1211 16:13:16.094652 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e78d009f-5e7e-4b01-af6b-da9313dcc57f-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:16 crc kubenswrapper[5120]: I1211 16:13:16.094672 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e78d009f-5e7e-4b01-af6b-da9313dcc57f-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:16 crc kubenswrapper[5120]: I1211 16:13:16.094702 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-mfv8j-pull\" (UniqueName: \"kubernetes.io/secret/e78d009f-5e7e-4b01-af6b-da9313dcc57f-builder-dockercfg-mfv8j-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:16 crc kubenswrapper[5120]: I1211 16:13:16.094739 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/e78d009f-5e7e-4b01-af6b-da9313dcc57f-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:16 crc kubenswrapper[5120]: I1211 16:13:16.094904 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e78d009f-5e7e-4b01-af6b-da9313dcc57f-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:16 crc kubenswrapper[5120]: I1211 16:13:16.095008 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e78d009f-5e7e-4b01-af6b-da9313dcc57f-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: 
\"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:16 crc kubenswrapper[5120]: I1211 16:13:16.095140 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e78d009f-5e7e-4b01-af6b-da9313dcc57f-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:16 crc kubenswrapper[5120]: I1211 16:13:16.096223 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e78d009f-5e7e-4b01-af6b-da9313dcc57f-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:16 crc kubenswrapper[5120]: I1211 16:13:16.101487 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-mfv8j-pull\" (UniqueName: \"kubernetes.io/secret/e78d009f-5e7e-4b01-af6b-da9313dcc57f-builder-dockercfg-mfv8j-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:16 crc kubenswrapper[5120]: I1211 16:13:16.102624 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/e78d009f-5e7e-4b01-af6b-da9313dcc57f-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:16 crc kubenswrapper[5120]: I1211 16:13:16.107450 5120 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-mfv8j-push\" (UniqueName: \"kubernetes.io/secret/e78d009f-5e7e-4b01-af6b-da9313dcc57f-builder-dockercfg-mfv8j-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:16 crc kubenswrapper[5120]: I1211 16:13:16.123922 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm9rt\" (UniqueName: \"kubernetes.io/projected/e78d009f-5e7e-4b01-af6b-da9313dcc57f-kube-api-access-wm9rt\") pod \"service-telemetry-framework-index-2-build\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:16 crc kubenswrapper[5120]: I1211 16:13:16.246845 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:16 crc kubenswrapper[5120]: I1211 16:13:16.401116 5120 generic.go:358] "Generic (PLEG): container finished" podID="477943e4-56b7-40a5-9185-399fee3c53d4" containerID="ffaba31317981a6cec1ae908bd8c3311ec7f045f88488f9495d5dabcdad2c8a0" exitCode=0 Dec 11 16:13:16 crc kubenswrapper[5120]: I1211 16:13:16.402664 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtwqr" event={"ID":"477943e4-56b7-40a5-9185-399fee3c53d4","Type":"ContainerDied","Data":"ffaba31317981a6cec1ae908bd8c3311ec7f045f88488f9495d5dabcdad2c8a0"} Dec 11 16:13:16 crc kubenswrapper[5120]: I1211 16:13:16.751023 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Dec 11 16:13:17 crc kubenswrapper[5120]: I1211 16:13:17.410216 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" 
event={"ID":"e78d009f-5e7e-4b01-af6b-da9313dcc57f","Type":"ContainerStarted","Data":"a63c794aeb657187eb11ee0aefe869a73bc6767210566b780c61ed6ddbaa9293"} Dec 11 16:13:17 crc kubenswrapper[5120]: I1211 16:13:17.411961 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"e78d009f-5e7e-4b01-af6b-da9313dcc57f","Type":"ContainerStarted","Data":"b116191e7562b2c3597ae691f997c565ad549446b50372382f415fb2682fe373"} Dec 11 16:13:17 crc kubenswrapper[5120]: I1211 16:13:17.413509 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtwqr" event={"ID":"477943e4-56b7-40a5-9185-399fee3c53d4","Type":"ContainerStarted","Data":"639f0fe208a009931221446a26cddda17614423a8503ae9af48d65a64741d95b"} Dec 11 16:13:17 crc kubenswrapper[5120]: I1211 16:13:17.458213 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xtwqr" podStartSLOduration=3.642535868 podStartE2EDuration="4.458196028s" podCreationTimestamp="2025-12-11 16:13:13 +0000 UTC" firstStartedPulling="2025-12-11 16:13:14.381943191 +0000 UTC m=+743.636246522" lastFinishedPulling="2025-12-11 16:13:15.197603341 +0000 UTC m=+744.451906682" observedRunningTime="2025-12-11 16:13:17.456131201 +0000 UTC m=+746.710434572" watchObservedRunningTime="2025-12-11 16:13:17.458196028 +0000 UTC m=+746.712499349" Dec 11 16:13:17 crc kubenswrapper[5120]: I1211 16:13:17.464447 5120 ???:1] "http: TLS handshake error from 192.168.126.11:52666: no serving certificate available for the kubelet" Dec 11 16:13:18 crc kubenswrapper[5120]: I1211 16:13:18.489726 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"] Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.424015 5120 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="service-telemetry/service-telemetry-framework-index-2-build" podUID="e78d009f-5e7e-4b01-af6b-da9313dcc57f" containerName="git-clone" containerID="cri-o://a63c794aeb657187eb11ee0aefe869a73bc6767210566b780c61ed6ddbaa9293" gracePeriod=30 Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.851927 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-2-build_e78d009f-5e7e-4b01-af6b-da9313dcc57f/git-clone/0.log" Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.852007 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build" Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.959377 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e78d009f-5e7e-4b01-af6b-da9313dcc57f-build-ca-bundles\") pod \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.959523 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e78d009f-5e7e-4b01-af6b-da9313dcc57f-buildworkdir\") pod \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.959589 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e78d009f-5e7e-4b01-af6b-da9313dcc57f-build-blob-cache\") pod \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.959663 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm9rt\" (UniqueName: 
\"kubernetes.io/projected/e78d009f-5e7e-4b01-af6b-da9313dcc57f-kube-api-access-wm9rt\") pod \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.959724 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-mfv8j-push\" (UniqueName: \"kubernetes.io/secret/e78d009f-5e7e-4b01-af6b-da9313dcc57f-builder-dockercfg-mfv8j-push\") pod \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.959759 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e78d009f-5e7e-4b01-af6b-da9313dcc57f-container-storage-root\") pod \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") " Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.959882 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e78d009f-5e7e-4b01-af6b-da9313dcc57f-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "e78d009f-5e7e-4b01-af6b-da9313dcc57f" (UID: "e78d009f-5e7e-4b01-af6b-da9313dcc57f"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.959942 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e78d009f-5e7e-4b01-af6b-da9313dcc57f-build-proxy-ca-bundles\") pod \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") "
Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.960036 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e78d009f-5e7e-4b01-af6b-da9313dcc57f-node-pullsecrets\") pod \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") "
Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.960127 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e78d009f-5e7e-4b01-af6b-da9313dcc57f-build-system-configs\") pod \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") "
Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.960162 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e78d009f-5e7e-4b01-af6b-da9313dcc57f-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "e78d009f-5e7e-4b01-af6b-da9313dcc57f" (UID: "e78d009f-5e7e-4b01-af6b-da9313dcc57f"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.960190 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e78d009f-5e7e-4b01-af6b-da9313dcc57f-buildcachedir\") pod \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") "
Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.960255 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/e78d009f-5e7e-4b01-af6b-da9313dcc57f-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") "
Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.960336 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-mfv8j-pull\" (UniqueName: \"kubernetes.io/secret/e78d009f-5e7e-4b01-af6b-da9313dcc57f-builder-dockercfg-mfv8j-pull\") pod \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") "
Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.960377 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e78d009f-5e7e-4b01-af6b-da9313dcc57f-container-storage-run\") pod \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\" (UID: \"e78d009f-5e7e-4b01-af6b-da9313dcc57f\") "
Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.960502 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e78d009f-5e7e-4b01-af6b-da9313dcc57f-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "e78d009f-5e7e-4b01-af6b-da9313dcc57f" (UID: "e78d009f-5e7e-4b01-af6b-da9313dcc57f"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.960572 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e78d009f-5e7e-4b01-af6b-da9313dcc57f-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "e78d009f-5e7e-4b01-af6b-da9313dcc57f" (UID: "e78d009f-5e7e-4b01-af6b-da9313dcc57f"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.960598 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e78d009f-5e7e-4b01-af6b-da9313dcc57f-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "e78d009f-5e7e-4b01-af6b-da9313dcc57f" (UID: "e78d009f-5e7e-4b01-af6b-da9313dcc57f"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.960793 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e78d009f-5e7e-4b01-af6b-da9313dcc57f-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "e78d009f-5e7e-4b01-af6b-da9313dcc57f" (UID: "e78d009f-5e7e-4b01-af6b-da9313dcc57f"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.960829 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e78d009f-5e7e-4b01-af6b-da9313dcc57f-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "e78d009f-5e7e-4b01-af6b-da9313dcc57f" (UID: "e78d009f-5e7e-4b01-af6b-da9313dcc57f"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.961002 5120 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e78d009f-5e7e-4b01-af6b-da9313dcc57f-build-blob-cache\") on node \"crc\" DevicePath \"\""
Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.961021 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e78d009f-5e7e-4b01-af6b-da9313dcc57f-container-storage-root\") on node \"crc\" DevicePath \"\""
Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.961044 5120 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e78d009f-5e7e-4b01-af6b-da9313dcc57f-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.961056 5120 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e78d009f-5e7e-4b01-af6b-da9313dcc57f-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.961073 5120 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e78d009f-5e7e-4b01-af6b-da9313dcc57f-build-system-configs\") on node \"crc\" DevicePath \"\""
Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.961084 5120 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e78d009f-5e7e-4b01-af6b-da9313dcc57f-buildcachedir\") on node \"crc\" DevicePath \"\""
Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.961094 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e78d009f-5e7e-4b01-af6b-da9313dcc57f-container-storage-run\") on node \"crc\" DevicePath \"\""
Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.960999 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e78d009f-5e7e-4b01-af6b-da9313dcc57f-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "e78d009f-5e7e-4b01-af6b-da9313dcc57f" (UID: "e78d009f-5e7e-4b01-af6b-da9313dcc57f"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.961110 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e78d009f-5e7e-4b01-af6b-da9313dcc57f-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "e78d009f-5e7e-4b01-af6b-da9313dcc57f" (UID: "e78d009f-5e7e-4b01-af6b-da9313dcc57f"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.967031 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e78d009f-5e7e-4b01-af6b-da9313dcc57f-builder-dockercfg-mfv8j-pull" (OuterVolumeSpecName: "builder-dockercfg-mfv8j-pull") pod "e78d009f-5e7e-4b01-af6b-da9313dcc57f" (UID: "e78d009f-5e7e-4b01-af6b-da9313dcc57f"). InnerVolumeSpecName "builder-dockercfg-mfv8j-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.967030 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e78d009f-5e7e-4b01-af6b-da9313dcc57f-kube-api-access-wm9rt" (OuterVolumeSpecName: "kube-api-access-wm9rt") pod "e78d009f-5e7e-4b01-af6b-da9313dcc57f" (UID: "e78d009f-5e7e-4b01-af6b-da9313dcc57f"). InnerVolumeSpecName "kube-api-access-wm9rt". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.967075 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e78d009f-5e7e-4b01-af6b-da9313dcc57f-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "e78d009f-5e7e-4b01-af6b-da9313dcc57f" (UID: "e78d009f-5e7e-4b01-af6b-da9313dcc57f"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:13:19 crc kubenswrapper[5120]: I1211 16:13:19.972139 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e78d009f-5e7e-4b01-af6b-da9313dcc57f-builder-dockercfg-mfv8j-push" (OuterVolumeSpecName: "builder-dockercfg-mfv8j-push") pod "e78d009f-5e7e-4b01-af6b-da9313dcc57f" (UID: "e78d009f-5e7e-4b01-af6b-da9313dcc57f"). InnerVolumeSpecName "builder-dockercfg-mfv8j-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 11 16:13:20 crc kubenswrapper[5120]: I1211 16:13:20.062050 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-mfv8j-pull\" (UniqueName: \"kubernetes.io/secret/e78d009f-5e7e-4b01-af6b-da9313dcc57f-builder-dockercfg-mfv8j-pull\") on node \"crc\" DevicePath \"\""
Dec 11 16:13:20 crc kubenswrapper[5120]: I1211 16:13:20.062533 5120 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e78d009f-5e7e-4b01-af6b-da9313dcc57f-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 11 16:13:20 crc kubenswrapper[5120]: I1211 16:13:20.062543 5120 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e78d009f-5e7e-4b01-af6b-da9313dcc57f-buildworkdir\") on node \"crc\" DevicePath \"\""
Dec 11 16:13:20 crc kubenswrapper[5120]: I1211 16:13:20.062552 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wm9rt\" (UniqueName: \"kubernetes.io/projected/e78d009f-5e7e-4b01-af6b-da9313dcc57f-kube-api-access-wm9rt\") on node \"crc\" DevicePath \"\""
Dec 11 16:13:20 crc kubenswrapper[5120]: I1211 16:13:20.062561 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-mfv8j-push\" (UniqueName: \"kubernetes.io/secret/e78d009f-5e7e-4b01-af6b-da9313dcc57f-builder-dockercfg-mfv8j-push\") on node \"crc\" DevicePath \"\""
Dec 11 16:13:20 crc kubenswrapper[5120]: I1211 16:13:20.062571 5120 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/e78d009f-5e7e-4b01-af6b-da9313dcc57f-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\""
Dec 11 16:13:20 crc kubenswrapper[5120]: I1211 16:13:20.431647 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-2-build_e78d009f-5e7e-4b01-af6b-da9313dcc57f/git-clone/0.log"
Dec 11 16:13:20 crc kubenswrapper[5120]: I1211 16:13:20.431725 5120 generic.go:358] "Generic (PLEG): container finished" podID="e78d009f-5e7e-4b01-af6b-da9313dcc57f" containerID="a63c794aeb657187eb11ee0aefe869a73bc6767210566b780c61ed6ddbaa9293" exitCode=1
Dec 11 16:13:20 crc kubenswrapper[5120]: I1211 16:13:20.432017 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"e78d009f-5e7e-4b01-af6b-da9313dcc57f","Type":"ContainerDied","Data":"a63c794aeb657187eb11ee0aefe869a73bc6767210566b780c61ed6ddbaa9293"}
Dec 11 16:13:20 crc kubenswrapper[5120]: I1211 16:13:20.432058 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"e78d009f-5e7e-4b01-af6b-da9313dcc57f","Type":"ContainerDied","Data":"b116191e7562b2c3597ae691f997c565ad549446b50372382f415fb2682fe373"}
Dec 11 16:13:20 crc kubenswrapper[5120]: I1211 16:13:20.432086 5120 scope.go:117] "RemoveContainer" containerID="a63c794aeb657187eb11ee0aefe869a73bc6767210566b780c61ed6ddbaa9293"
Dec 11 16:13:20 crc kubenswrapper[5120]: I1211 16:13:20.432323 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build"
Dec 11 16:13:20 crc kubenswrapper[5120]: I1211 16:13:20.455236 5120 scope.go:117] "RemoveContainer" containerID="a63c794aeb657187eb11ee0aefe869a73bc6767210566b780c61ed6ddbaa9293"
Dec 11 16:13:20 crc kubenswrapper[5120]: E1211 16:13:20.455782 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a63c794aeb657187eb11ee0aefe869a73bc6767210566b780c61ed6ddbaa9293\": container with ID starting with a63c794aeb657187eb11ee0aefe869a73bc6767210566b780c61ed6ddbaa9293 not found: ID does not exist" containerID="a63c794aeb657187eb11ee0aefe869a73bc6767210566b780c61ed6ddbaa9293"
Dec 11 16:13:20 crc kubenswrapper[5120]: I1211 16:13:20.455986 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a63c794aeb657187eb11ee0aefe869a73bc6767210566b780c61ed6ddbaa9293"} err="failed to get container status \"a63c794aeb657187eb11ee0aefe869a73bc6767210566b780c61ed6ddbaa9293\": rpc error: code = NotFound desc = could not find container \"a63c794aeb657187eb11ee0aefe869a73bc6767210566b780c61ed6ddbaa9293\": container with ID starting with a63c794aeb657187eb11ee0aefe869a73bc6767210566b780c61ed6ddbaa9293 not found: ID does not exist"
Dec 11 16:13:20 crc kubenswrapper[5120]: I1211 16:13:20.481600 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Dec 11 16:13:20 crc kubenswrapper[5120]: I1211 16:13:20.485467 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Dec 11 16:13:21 crc kubenswrapper[5120]: I1211 16:13:21.029850 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e78d009f-5e7e-4b01-af6b-da9313dcc57f" path="/var/lib/kubelet/pods/e78d009f-5e7e-4b01-af6b-da9313dcc57f/volumes"
Dec 11 16:13:23 crc kubenswrapper[5120]: I1211 16:13:23.562308 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xtwqr"
Dec 11 16:13:23 crc kubenswrapper[5120]: I1211 16:13:23.562706 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-xtwqr"
Dec 11 16:13:23 crc kubenswrapper[5120]: I1211 16:13:23.622813 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xtwqr"
Dec 11 16:13:24 crc kubenswrapper[5120]: I1211 16:13:24.510086 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xtwqr"
Dec 11 16:13:24 crc kubenswrapper[5120]: I1211 16:13:24.559672 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xtwqr"]
Dec 11 16:13:26 crc kubenswrapper[5120]: I1211 16:13:26.478194 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xtwqr" podUID="477943e4-56b7-40a5-9185-399fee3c53d4" containerName="registry-server" containerID="cri-o://639f0fe208a009931221446a26cddda17614423a8503ae9af48d65a64741d95b" gracePeriod=2
Dec 11 16:13:28 crc kubenswrapper[5120]: I1211 16:13:28.213308 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xtwqr"
Dec 11 16:13:28 crc kubenswrapper[5120]: I1211 16:13:28.279943 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/477943e4-56b7-40a5-9185-399fee3c53d4-utilities\") pod \"477943e4-56b7-40a5-9185-399fee3c53d4\" (UID: \"477943e4-56b7-40a5-9185-399fee3c53d4\") "
Dec 11 16:13:28 crc kubenswrapper[5120]: I1211 16:13:28.280022 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzpj\" (UniqueName: \"kubernetes.io/projected/477943e4-56b7-40a5-9185-399fee3c53d4-kube-api-access-mfzpj\") pod \"477943e4-56b7-40a5-9185-399fee3c53d4\" (UID: \"477943e4-56b7-40a5-9185-399fee3c53d4\") "
Dec 11 16:13:28 crc kubenswrapper[5120]: I1211 16:13:28.280067 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/477943e4-56b7-40a5-9185-399fee3c53d4-catalog-content\") pod \"477943e4-56b7-40a5-9185-399fee3c53d4\" (UID: \"477943e4-56b7-40a5-9185-399fee3c53d4\") "
Dec 11 16:13:28 crc kubenswrapper[5120]: I1211 16:13:28.282114 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/477943e4-56b7-40a5-9185-399fee3c53d4-utilities" (OuterVolumeSpecName: "utilities") pod "477943e4-56b7-40a5-9185-399fee3c53d4" (UID: "477943e4-56b7-40a5-9185-399fee3c53d4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:13:28 crc kubenswrapper[5120]: I1211 16:13:28.289300 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/477943e4-56b7-40a5-9185-399fee3c53d4-kube-api-access-mfzpj" (OuterVolumeSpecName: "kube-api-access-mfzpj") pod "477943e4-56b7-40a5-9185-399fee3c53d4" (UID: "477943e4-56b7-40a5-9185-399fee3c53d4"). InnerVolumeSpecName "kube-api-access-mfzpj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:13:28 crc kubenswrapper[5120]: I1211 16:13:28.376894 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/477943e4-56b7-40a5-9185-399fee3c53d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "477943e4-56b7-40a5-9185-399fee3c53d4" (UID: "477943e4-56b7-40a5-9185-399fee3c53d4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:13:28 crc kubenswrapper[5120]: I1211 16:13:28.382796 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/477943e4-56b7-40a5-9185-399fee3c53d4-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 16:13:28 crc kubenswrapper[5120]: I1211 16:13:28.382831 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzpj\" (UniqueName: \"kubernetes.io/projected/477943e4-56b7-40a5-9185-399fee3c53d4-kube-api-access-mfzpj\") on node \"crc\" DevicePath \"\""
Dec 11 16:13:28 crc kubenswrapper[5120]: I1211 16:13:28.382844 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/477943e4-56b7-40a5-9185-399fee3c53d4-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 16:13:28 crc kubenswrapper[5120]: I1211 16:13:28.496374 5120 generic.go:358] "Generic (PLEG): container finished" podID="477943e4-56b7-40a5-9185-399fee3c53d4" containerID="639f0fe208a009931221446a26cddda17614423a8503ae9af48d65a64741d95b" exitCode=0
Dec 11 16:13:28 crc kubenswrapper[5120]: I1211 16:13:28.496498 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xtwqr"
Dec 11 16:13:28 crc kubenswrapper[5120]: I1211 16:13:28.496595 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtwqr" event={"ID":"477943e4-56b7-40a5-9185-399fee3c53d4","Type":"ContainerDied","Data":"639f0fe208a009931221446a26cddda17614423a8503ae9af48d65a64741d95b"}
Dec 11 16:13:28 crc kubenswrapper[5120]: I1211 16:13:28.496690 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtwqr" event={"ID":"477943e4-56b7-40a5-9185-399fee3c53d4","Type":"ContainerDied","Data":"30347110f104ad6e44a5dcf008c8557b29497503da53e01ba38c66ac3a3a155f"}
Dec 11 16:13:28 crc kubenswrapper[5120]: I1211 16:13:28.496733 5120 scope.go:117] "RemoveContainer" containerID="639f0fe208a009931221446a26cddda17614423a8503ae9af48d65a64741d95b"
Dec 11 16:13:28 crc kubenswrapper[5120]: I1211 16:13:28.522400 5120 scope.go:117] "RemoveContainer" containerID="ffaba31317981a6cec1ae908bd8c3311ec7f045f88488f9495d5dabcdad2c8a0"
Dec 11 16:13:28 crc kubenswrapper[5120]: I1211 16:13:28.532471 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xtwqr"]
Dec 11 16:13:28 crc kubenswrapper[5120]: I1211 16:13:28.544962 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xtwqr"]
Dec 11 16:13:28 crc kubenswrapper[5120]: I1211 16:13:28.563858 5120 scope.go:117] "RemoveContainer" containerID="38e6b6e5521e9dd153edfcbd31c943f40667a31929bf90daf6bcba86a549a42e"
Dec 11 16:13:28 crc kubenswrapper[5120]: I1211 16:13:28.592495 5120 scope.go:117] "RemoveContainer" containerID="639f0fe208a009931221446a26cddda17614423a8503ae9af48d65a64741d95b"
Dec 11 16:13:28 crc kubenswrapper[5120]: E1211 16:13:28.593257 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"639f0fe208a009931221446a26cddda17614423a8503ae9af48d65a64741d95b\": container with ID starting with 639f0fe208a009931221446a26cddda17614423a8503ae9af48d65a64741d95b not found: ID does not exist" containerID="639f0fe208a009931221446a26cddda17614423a8503ae9af48d65a64741d95b"
Dec 11 16:13:28 crc kubenswrapper[5120]: I1211 16:13:28.593313 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"639f0fe208a009931221446a26cddda17614423a8503ae9af48d65a64741d95b"} err="failed to get container status \"639f0fe208a009931221446a26cddda17614423a8503ae9af48d65a64741d95b\": rpc error: code = NotFound desc = could not find container \"639f0fe208a009931221446a26cddda17614423a8503ae9af48d65a64741d95b\": container with ID starting with 639f0fe208a009931221446a26cddda17614423a8503ae9af48d65a64741d95b not found: ID does not exist"
Dec 11 16:13:28 crc kubenswrapper[5120]: I1211 16:13:28.593347 5120 scope.go:117] "RemoveContainer" containerID="ffaba31317981a6cec1ae908bd8c3311ec7f045f88488f9495d5dabcdad2c8a0"
Dec 11 16:13:28 crc kubenswrapper[5120]: E1211 16:13:28.593959 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffaba31317981a6cec1ae908bd8c3311ec7f045f88488f9495d5dabcdad2c8a0\": container with ID starting with ffaba31317981a6cec1ae908bd8c3311ec7f045f88488f9495d5dabcdad2c8a0 not found: ID does not exist" containerID="ffaba31317981a6cec1ae908bd8c3311ec7f045f88488f9495d5dabcdad2c8a0"
Dec 11 16:13:28 crc kubenswrapper[5120]: I1211 16:13:28.594029 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffaba31317981a6cec1ae908bd8c3311ec7f045f88488f9495d5dabcdad2c8a0"} err="failed to get container status \"ffaba31317981a6cec1ae908bd8c3311ec7f045f88488f9495d5dabcdad2c8a0\": rpc error: code = NotFound desc = could not find container \"ffaba31317981a6cec1ae908bd8c3311ec7f045f88488f9495d5dabcdad2c8a0\": container with ID starting with ffaba31317981a6cec1ae908bd8c3311ec7f045f88488f9495d5dabcdad2c8a0 not found: ID does not exist"
Dec 11 16:13:28 crc kubenswrapper[5120]: I1211 16:13:28.594071 5120 scope.go:117] "RemoveContainer" containerID="38e6b6e5521e9dd153edfcbd31c943f40667a31929bf90daf6bcba86a549a42e"
Dec 11 16:13:28 crc kubenswrapper[5120]: E1211 16:13:28.594551 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38e6b6e5521e9dd153edfcbd31c943f40667a31929bf90daf6bcba86a549a42e\": container with ID starting with 38e6b6e5521e9dd153edfcbd31c943f40667a31929bf90daf6bcba86a549a42e not found: ID does not exist" containerID="38e6b6e5521e9dd153edfcbd31c943f40667a31929bf90daf6bcba86a549a42e"
Dec 11 16:13:28 crc kubenswrapper[5120]: I1211 16:13:28.594597 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38e6b6e5521e9dd153edfcbd31c943f40667a31929bf90daf6bcba86a549a42e"} err="failed to get container status \"38e6b6e5521e9dd153edfcbd31c943f40667a31929bf90daf6bcba86a549a42e\": rpc error: code = NotFound desc = could not find container \"38e6b6e5521e9dd153edfcbd31c943f40667a31929bf90daf6bcba86a549a42e\": container with ID starting with 38e6b6e5521e9dd153edfcbd31c943f40667a31929bf90daf6bcba86a549a42e not found: ID does not exist"
Dec 11 16:13:29 crc kubenswrapper[5120]: I1211 16:13:29.034437 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="477943e4-56b7-40a5-9185-399fee3c53d4" path="/var/lib/kubelet/pods/477943e4-56b7-40a5-9185-399fee3c53d4/volumes"
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.004250 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"]
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.005545 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e78d009f-5e7e-4b01-af6b-da9313dcc57f" containerName="git-clone"
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.005570 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="e78d009f-5e7e-4b01-af6b-da9313dcc57f" containerName="git-clone"
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.005590 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="477943e4-56b7-40a5-9185-399fee3c53d4" containerName="extract-utilities"
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.005627 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="477943e4-56b7-40a5-9185-399fee3c53d4" containerName="extract-utilities"
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.005664 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="477943e4-56b7-40a5-9185-399fee3c53d4" containerName="registry-server"
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.005679 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="477943e4-56b7-40a5-9185-399fee3c53d4" containerName="registry-server"
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.005714 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="477943e4-56b7-40a5-9185-399fee3c53d4" containerName="extract-content"
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.005727 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="477943e4-56b7-40a5-9185-399fee3c53d4" containerName="extract-content"
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.005899 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="477943e4-56b7-40a5-9185-399fee3c53d4" containerName="registry-server"
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.005927 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="e78d009f-5e7e-4b01-af6b-da9313dcc57f" containerName="git-clone"
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.017636 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.021033 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-3-global-ca\""
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.021352 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-mfv8j\""
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.021535 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-3-ca\""
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.022058 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-3-sys-config\""
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.028231 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\""
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.046915 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"]
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.106829 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-mfv8j-pull\" (UniqueName: \"kubernetes.io/secret/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-builder-dockercfg-mfv8j-pull\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.106898 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-build-system-configs\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.107073 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwdwh\" (UniqueName: \"kubernetes.io/projected/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-kube-api-access-nwdwh\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.107166 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.107193 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.107224 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-mfv8j-push\" (UniqueName: \"kubernetes.io/secret/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-builder-dockercfg-mfv8j-push\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.107262 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.107315 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.107368 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.107395 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.107472 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.107512 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.107548 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.209121 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.209219 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-mfv8j-pull\" (UniqueName: \"kubernetes.io/secret/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-builder-dockercfg-mfv8j-pull\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.209263 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-build-system-configs\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.209314 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nwdwh\" (UniqueName: \"kubernetes.io/projected/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-kube-api-access-nwdwh\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.209342 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.209365 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 
11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.209390 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-mfv8j-push\" (UniqueName: \"kubernetes.io/secret/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-builder-dockercfg-mfv8j-push\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.209423 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.209460 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.209498 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.209523 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" 
(UniqueName: \"kubernetes.io/empty-dir/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.209566 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.209595 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.210120 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.210336 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 
16:13:30.211132 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.211244 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.211345 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-build-system-configs\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.211374 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.211348 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: 
\"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.211447 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.212563 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.215682 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-mfv8j-push\" (UniqueName: \"kubernetes.io/secret/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-builder-dockercfg-mfv8j-push\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.216284 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-mfv8j-pull\" (UniqueName: \"kubernetes.io/secret/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-builder-dockercfg-mfv8j-pull\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.218678 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.233698 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwdwh\" (UniqueName: \"kubernetes.io/projected/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-kube-api-access-nwdwh\") pod \"service-telemetry-framework-index-3-build\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.344636 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 16:13:30 crc kubenswrapper[5120]: I1211 16:13:30.592509 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Dec 11 16:13:31 crc kubenswrapper[5120]: I1211 16:13:31.527584 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"c42f56cb-e80f-43bf-b2bf-6592dcd95e64","Type":"ContainerStarted","Data":"c81cce59c372a097cd8639db6a6b0a5b778d896455c7cf2b7e2850ce0371fb58"} Dec 11 16:13:31 crc kubenswrapper[5120]: I1211 16:13:31.528042 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"c42f56cb-e80f-43bf-b2bf-6592dcd95e64","Type":"ContainerStarted","Data":"e0220fb3a6d72e67308943e3f2a9319899ba77f73128d14256ff83422cab6f34"} Dec 11 16:13:31 crc kubenswrapper[5120]: I1211 16:13:31.597557 5120 ???:1] "http: TLS handshake error from 192.168.126.11:47502: no 
serving certificate available for the kubelet" Dec 11 16:13:32 crc kubenswrapper[5120]: I1211 16:13:32.630422 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Dec 11 16:13:33 crc kubenswrapper[5120]: I1211 16:13:33.547502 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-3-build" podUID="c42f56cb-e80f-43bf-b2bf-6592dcd95e64" containerName="git-clone" containerID="cri-o://c81cce59c372a097cd8639db6a6b0a5b778d896455c7cf2b7e2850ce0371fb58" gracePeriod=30 Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.077899 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-3-build_c42f56cb-e80f-43bf-b2bf-6592dcd95e64/git-clone/0.log" Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.078242 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.173228 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-buildworkdir\") pod \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.173275 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.173293 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-buildcachedir\") pod \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.173335 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-build-ca-bundles\") pod \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.173378 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-mfv8j-pull\" (UniqueName: \"kubernetes.io/secret/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-builder-dockercfg-mfv8j-pull\") pod \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.173402 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwdwh\" (UniqueName: \"kubernetes.io/projected/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-kube-api-access-nwdwh\") pod \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.173434 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-node-pullsecrets\") pod \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.173475 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-build-system-configs\") pod \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\" (UID: 
\"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.173498 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-build-blob-cache\") pod \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.173518 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-build-proxy-ca-bundles\") pod \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.173539 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-container-storage-run\") pod \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.173577 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-mfv8j-push\" (UniqueName: \"kubernetes.io/secret/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-builder-dockercfg-mfv8j-push\") pod \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.173627 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-container-storage-root\") pod \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\" (UID: \"c42f56cb-e80f-43bf-b2bf-6592dcd95e64\") " Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.174355 5120 operation_generator.go:781] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "c42f56cb-e80f-43bf-b2bf-6592dcd95e64" (UID: "c42f56cb-e80f-43bf-b2bf-6592dcd95e64"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.174373 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "c42f56cb-e80f-43bf-b2bf-6592dcd95e64" (UID: "c42f56cb-e80f-43bf-b2bf-6592dcd95e64"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.175224 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "c42f56cb-e80f-43bf-b2bf-6592dcd95e64" (UID: "c42f56cb-e80f-43bf-b2bf-6592dcd95e64"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.175326 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "c42f56cb-e80f-43bf-b2bf-6592dcd95e64" (UID: "c42f56cb-e80f-43bf-b2bf-6592dcd95e64"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.175358 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "c42f56cb-e80f-43bf-b2bf-6592dcd95e64" (UID: "c42f56cb-e80f-43bf-b2bf-6592dcd95e64"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.175866 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "c42f56cb-e80f-43bf-b2bf-6592dcd95e64" (UID: "c42f56cb-e80f-43bf-b2bf-6592dcd95e64"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.176122 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "c42f56cb-e80f-43bf-b2bf-6592dcd95e64" (UID: "c42f56cb-e80f-43bf-b2bf-6592dcd95e64"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.176363 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "c42f56cb-e80f-43bf-b2bf-6592dcd95e64" (UID: "c42f56cb-e80f-43bf-b2bf-6592dcd95e64"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.176517 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "c42f56cb-e80f-43bf-b2bf-6592dcd95e64" (UID: "c42f56cb-e80f-43bf-b2bf-6592dcd95e64"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.181890 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-kube-api-access-nwdwh" (OuterVolumeSpecName: "kube-api-access-nwdwh") pod "c42f56cb-e80f-43bf-b2bf-6592dcd95e64" (UID: "c42f56cb-e80f-43bf-b2bf-6592dcd95e64"). InnerVolumeSpecName "kube-api-access-nwdwh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.182688 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-builder-dockercfg-mfv8j-push" (OuterVolumeSpecName: "builder-dockercfg-mfv8j-push") pod "c42f56cb-e80f-43bf-b2bf-6592dcd95e64" (UID: "c42f56cb-e80f-43bf-b2bf-6592dcd95e64"). InnerVolumeSpecName "builder-dockercfg-mfv8j-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.183707 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-builder-dockercfg-mfv8j-pull" (OuterVolumeSpecName: "builder-dockercfg-mfv8j-pull") pod "c42f56cb-e80f-43bf-b2bf-6592dcd95e64" (UID: "c42f56cb-e80f-43bf-b2bf-6592dcd95e64"). InnerVolumeSpecName "builder-dockercfg-mfv8j-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.184377 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "c42f56cb-e80f-43bf-b2bf-6592dcd95e64" (UID: "c42f56cb-e80f-43bf-b2bf-6592dcd95e64"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.275823 5120 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.275900 5120 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.275926 5120 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.275948 5120 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.275969 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-mfv8j-pull\" (UniqueName: 
\"kubernetes.io/secret/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-builder-dockercfg-mfv8j-pull\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.275990 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nwdwh\" (UniqueName: \"kubernetes.io/projected/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-kube-api-access-nwdwh\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.276007 5120 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.276051 5120 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.276069 5120 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.276087 5120 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.276105 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.276125 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-mfv8j-push\" (UniqueName: 
\"kubernetes.io/secret/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-builder-dockercfg-mfv8j-push\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.276172 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c42f56cb-e80f-43bf-b2bf-6592dcd95e64-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.559112 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-3-build_c42f56cb-e80f-43bf-b2bf-6592dcd95e64/git-clone/0.log" Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.559258 5120 generic.go:358] "Generic (PLEG): container finished" podID="c42f56cb-e80f-43bf-b2bf-6592dcd95e64" containerID="c81cce59c372a097cd8639db6a6b0a5b778d896455c7cf2b7e2850ce0371fb58" exitCode=1 Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.559303 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"c42f56cb-e80f-43bf-b2bf-6592dcd95e64","Type":"ContainerDied","Data":"c81cce59c372a097cd8639db6a6b0a5b778d896455c7cf2b7e2850ce0371fb58"} Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.559344 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"c42f56cb-e80f-43bf-b2bf-6592dcd95e64","Type":"ContainerDied","Data":"e0220fb3a6d72e67308943e3f2a9319899ba77f73128d14256ff83422cab6f34"} Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.559373 5120 scope.go:117] "RemoveContainer" containerID="c81cce59c372a097cd8639db6a6b0a5b778d896455c7cf2b7e2850ce0371fb58" Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.559403 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build"
Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.581883 5120 scope.go:117] "RemoveContainer" containerID="c81cce59c372a097cd8639db6a6b0a5b778d896455c7cf2b7e2850ce0371fb58"
Dec 11 16:13:34 crc kubenswrapper[5120]: E1211 16:13:34.583086 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c81cce59c372a097cd8639db6a6b0a5b778d896455c7cf2b7e2850ce0371fb58\": container with ID starting with c81cce59c372a097cd8639db6a6b0a5b778d896455c7cf2b7e2850ce0371fb58 not found: ID does not exist" containerID="c81cce59c372a097cd8639db6a6b0a5b778d896455c7cf2b7e2850ce0371fb58"
Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.583203 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c81cce59c372a097cd8639db6a6b0a5b778d896455c7cf2b7e2850ce0371fb58"} err="failed to get container status \"c81cce59c372a097cd8639db6a6b0a5b778d896455c7cf2b7e2850ce0371fb58\": rpc error: code = NotFound desc = could not find container \"c81cce59c372a097cd8639db6a6b0a5b778d896455c7cf2b7e2850ce0371fb58\": container with ID starting with c81cce59c372a097cd8639db6a6b0a5b778d896455c7cf2b7e2850ce0371fb58 not found: ID does not exist"
Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.617782 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"]
Dec 11 16:13:34 crc kubenswrapper[5120]: I1211 16:13:34.631245 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"]
Dec 11 16:13:35 crc kubenswrapper[5120]: I1211 16:13:35.036859 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c42f56cb-e80f-43bf-b2bf-6592dcd95e64" path="/var/lib/kubelet/pods/c42f56cb-e80f-43bf-b2bf-6592dcd95e64/volumes"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.134983 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"]
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.136189 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c42f56cb-e80f-43bf-b2bf-6592dcd95e64" containerName="git-clone"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.136204 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="c42f56cb-e80f-43bf-b2bf-6592dcd95e64" containerName="git-clone"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.136297 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="c42f56cb-e80f-43bf-b2bf-6592dcd95e64" containerName="git-clone"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.153142 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"]
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.153287 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.156134 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\""
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.158725 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-4-ca\""
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.158976 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-4-sys-config\""
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.159019 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-mfv8j\""
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.159366 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-4-global-ca\""
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.229340 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-mfv8j-pull\" (UniqueName: \"kubernetes.io/secret/89548461-1f67-4f23-bd0e-fd26e55e73b1-builder-dockercfg-mfv8j-pull\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.229625 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/89548461-1f67-4f23-bd0e-fd26e55e73b1-build-system-configs\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.229762 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89548461-1f67-4f23-bd0e-fd26e55e73b1-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.229872 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/89548461-1f67-4f23-bd0e-fd26e55e73b1-build-blob-cache\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.230038 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/89548461-1f67-4f23-bd0e-fd26e55e73b1-buildcachedir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.230127 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/89548461-1f67-4f23-bd0e-fd26e55e73b1-buildworkdir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.230186 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-mfv8j-push\" (UniqueName: \"kubernetes.io/secret/89548461-1f67-4f23-bd0e-fd26e55e73b1-builder-dockercfg-mfv8j-push\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.230274 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/89548461-1f67-4f23-bd0e-fd26e55e73b1-container-storage-run\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.230338 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/89548461-1f67-4f23-bd0e-fd26e55e73b1-container-storage-root\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.230375 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/89548461-1f67-4f23-bd0e-fd26e55e73b1-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.230405 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89548461-1f67-4f23-bd0e-fd26e55e73b1-build-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.230434 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/89548461-1f67-4f23-bd0e-fd26e55e73b1-node-pullsecrets\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.230468 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkrhk\" (UniqueName: \"kubernetes.io/projected/89548461-1f67-4f23-bd0e-fd26e55e73b1-kube-api-access-dkrhk\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.331536 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/89548461-1f67-4f23-bd0e-fd26e55e73b1-container-storage-run\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.331591 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/89548461-1f67-4f23-bd0e-fd26e55e73b1-container-storage-root\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.331775 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/89548461-1f67-4f23-bd0e-fd26e55e73b1-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.332022 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89548461-1f67-4f23-bd0e-fd26e55e73b1-build-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.332069 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/89548461-1f67-4f23-bd0e-fd26e55e73b1-container-storage-root\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.332095 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/89548461-1f67-4f23-bd0e-fd26e55e73b1-node-pullsecrets\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.332143 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/89548461-1f67-4f23-bd0e-fd26e55e73b1-container-storage-run\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.332192 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/89548461-1f67-4f23-bd0e-fd26e55e73b1-node-pullsecrets\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.332172 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dkrhk\" (UniqueName: \"kubernetes.io/projected/89548461-1f67-4f23-bd0e-fd26e55e73b1-kube-api-access-dkrhk\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.332262 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-mfv8j-pull\" (UniqueName: \"kubernetes.io/secret/89548461-1f67-4f23-bd0e-fd26e55e73b1-builder-dockercfg-mfv8j-pull\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.332295 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/89548461-1f67-4f23-bd0e-fd26e55e73b1-build-system-configs\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.332357 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89548461-1f67-4f23-bd0e-fd26e55e73b1-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.333052 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/89548461-1f67-4f23-bd0e-fd26e55e73b1-build-system-configs\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.333320 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/89548461-1f67-4f23-bd0e-fd26e55e73b1-build-blob-cache\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.333382 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/89548461-1f67-4f23-bd0e-fd26e55e73b1-buildcachedir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.333426 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/89548461-1f67-4f23-bd0e-fd26e55e73b1-buildworkdir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.333454 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-mfv8j-push\" (UniqueName: \"kubernetes.io/secret/89548461-1f67-4f23-bd0e-fd26e55e73b1-builder-dockercfg-mfv8j-push\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.333525 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/89548461-1f67-4f23-bd0e-fd26e55e73b1-buildcachedir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.333763 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/89548461-1f67-4f23-bd0e-fd26e55e73b1-build-blob-cache\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.333769 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89548461-1f67-4f23-bd0e-fd26e55e73b1-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.334185 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/89548461-1f67-4f23-bd0e-fd26e55e73b1-buildworkdir\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.334902 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89548461-1f67-4f23-bd0e-fd26e55e73b1-build-ca-bundles\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.338325 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-mfv8j-push\" (UniqueName: \"kubernetes.io/secret/89548461-1f67-4f23-bd0e-fd26e55e73b1-builder-dockercfg-mfv8j-push\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.339042 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/89548461-1f67-4f23-bd0e-fd26e55e73b1-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.345125 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-mfv8j-pull\" (UniqueName: \"kubernetes.io/secret/89548461-1f67-4f23-bd0e-fd26e55e73b1-builder-dockercfg-mfv8j-pull\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.348331 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkrhk\" (UniqueName: \"kubernetes.io/projected/89548461-1f67-4f23-bd0e-fd26e55e73b1-kube-api-access-dkrhk\") pod \"service-telemetry-framework-index-4-build\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") " pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.494131 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:44 crc kubenswrapper[5120]: I1211 16:13:44.909010 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"]
Dec 11 16:13:44 crc kubenswrapper[5120]: W1211 16:13:44.910693 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89548461_1f67_4f23_bd0e_fd26e55e73b1.slice/crio-d335c9c826806065809d469447cab32ff3700d7c686bbe68c06b952178d4da83 WatchSource:0}: Error finding container d335c9c826806065809d469447cab32ff3700d7c686bbe68c06b952178d4da83: Status 404 returned error can't find the container with id d335c9c826806065809d469447cab32ff3700d7c686bbe68c06b952178d4da83
Dec 11 16:13:45 crc kubenswrapper[5120]: I1211 16:13:45.634949 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"89548461-1f67-4f23-bd0e-fd26e55e73b1","Type":"ContainerStarted","Data":"dc4b1cc523b9cb9b11dee64c5fe2625e19007ac327d0cca831ab785644a40ae9"}
Dec 11 16:13:45 crc kubenswrapper[5120]: I1211 16:13:45.634998 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"89548461-1f67-4f23-bd0e-fd26e55e73b1","Type":"ContainerStarted","Data":"d335c9c826806065809d469447cab32ff3700d7c686bbe68c06b952178d4da83"}
Dec 11 16:13:45 crc kubenswrapper[5120]: I1211 16:13:45.676142 5120 ???:1] "http: TLS handshake error from 192.168.126.11:50230: no serving certificate available for the kubelet"
Dec 11 16:13:46 crc kubenswrapper[5120]: I1211 16:13:46.706343 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"]
Dec 11 16:13:47 crc kubenswrapper[5120]: I1211 16:13:47.650363 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-4-build" podUID="89548461-1f67-4f23-bd0e-fd26e55e73b1" containerName="git-clone" containerID="cri-o://dc4b1cc523b9cb9b11dee64c5fe2625e19007ac327d0cca831ab785644a40ae9" gracePeriod=30
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.196723 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-c4fnz"]
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.318480 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-c4fnz"]
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.318661 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-c4fnz"
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.324306 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"infrawatch-operators-dockercfg-m5kq6\""
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.445103 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c599\" (UniqueName: \"kubernetes.io/projected/bd2e2674-1f6d-485e-93f4-33023376b5e0-kube-api-access-2c599\") pod \"infrawatch-operators-c4fnz\" (UID: \"bd2e2674-1f6d-485e-93f4-33023376b5e0\") " pod="service-telemetry/infrawatch-operators-c4fnz"
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.547995 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2c599\" (UniqueName: \"kubernetes.io/projected/bd2e2674-1f6d-485e-93f4-33023376b5e0-kube-api-access-2c599\") pod \"infrawatch-operators-c4fnz\" (UID: \"bd2e2674-1f6d-485e-93f4-33023376b5e0\") " pod="service-telemetry/infrawatch-operators-c4fnz"
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.565976 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c599\" (UniqueName: \"kubernetes.io/projected/bd2e2674-1f6d-485e-93f4-33023376b5e0-kube-api-access-2c599\") pod \"infrawatch-operators-c4fnz\" (UID: \"bd2e2674-1f6d-485e-93f4-33023376b5e0\") " pod="service-telemetry/infrawatch-operators-c4fnz"
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.610767 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-4-build_89548461-1f67-4f23-bd0e-fd26e55e73b1/git-clone/0.log"
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.611128 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.646681 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-c4fnz"
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.660872 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-4-build_89548461-1f67-4f23-bd0e-fd26e55e73b1/git-clone/0.log"
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.660913 5120 generic.go:358] "Generic (PLEG): container finished" podID="89548461-1f67-4f23-bd0e-fd26e55e73b1" containerID="dc4b1cc523b9cb9b11dee64c5fe2625e19007ac327d0cca831ab785644a40ae9" exitCode=1
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.661034 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"89548461-1f67-4f23-bd0e-fd26e55e73b1","Type":"ContainerDied","Data":"dc4b1cc523b9cb9b11dee64c5fe2625e19007ac327d0cca831ab785644a40ae9"}
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.661058 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-4-build" event={"ID":"89548461-1f67-4f23-bd0e-fd26e55e73b1","Type":"ContainerDied","Data":"d335c9c826806065809d469447cab32ff3700d7c686bbe68c06b952178d4da83"}
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.661075 5120 scope.go:117] "RemoveContainer" containerID="dc4b1cc523b9cb9b11dee64c5fe2625e19007ac327d0cca831ab785644a40ae9"
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.661176 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-4-build"
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.707115 5120 scope.go:117] "RemoveContainer" containerID="dc4b1cc523b9cb9b11dee64c5fe2625e19007ac327d0cca831ab785644a40ae9"
Dec 11 16:13:48 crc kubenswrapper[5120]: E1211 16:13:48.707581 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc4b1cc523b9cb9b11dee64c5fe2625e19007ac327d0cca831ab785644a40ae9\": container with ID starting with dc4b1cc523b9cb9b11dee64c5fe2625e19007ac327d0cca831ab785644a40ae9 not found: ID does not exist" containerID="dc4b1cc523b9cb9b11dee64c5fe2625e19007ac327d0cca831ab785644a40ae9"
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.707610 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc4b1cc523b9cb9b11dee64c5fe2625e19007ac327d0cca831ab785644a40ae9"} err="failed to get container status \"dc4b1cc523b9cb9b11dee64c5fe2625e19007ac327d0cca831ab785644a40ae9\": rpc error: code = NotFound desc = could not find container \"dc4b1cc523b9cb9b11dee64c5fe2625e19007ac327d0cca831ab785644a40ae9\": container with ID starting with dc4b1cc523b9cb9b11dee64c5fe2625e19007ac327d0cca831ab785644a40ae9 not found: ID does not exist"
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.753955 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/89548461-1f67-4f23-bd0e-fd26e55e73b1-container-storage-root\") pod \"89548461-1f67-4f23-bd0e-fd26e55e73b1\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") "
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.754015 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/89548461-1f67-4f23-bd0e-fd26e55e73b1-node-pullsecrets\") pod \"89548461-1f67-4f23-bd0e-fd26e55e73b1\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") "
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.754084 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/89548461-1f67-4f23-bd0e-fd26e55e73b1-build-blob-cache\") pod \"89548461-1f67-4f23-bd0e-fd26e55e73b1\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") "
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.754110 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/89548461-1f67-4f23-bd0e-fd26e55e73b1-buildworkdir\") pod \"89548461-1f67-4f23-bd0e-fd26e55e73b1\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") "
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.754130 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-mfv8j-push\" (UniqueName: \"kubernetes.io/secret/89548461-1f67-4f23-bd0e-fd26e55e73b1-builder-dockercfg-mfv8j-push\") pod \"89548461-1f67-4f23-bd0e-fd26e55e73b1\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") "
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.754176 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/89548461-1f67-4f23-bd0e-fd26e55e73b1-container-storage-run\") pod \"89548461-1f67-4f23-bd0e-fd26e55e73b1\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") "
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.754201 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89548461-1f67-4f23-bd0e-fd26e55e73b1-build-proxy-ca-bundles\") pod \"89548461-1f67-4f23-bd0e-fd26e55e73b1\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") "
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.754814 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89548461-1f67-4f23-bd0e-fd26e55e73b1-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "89548461-1f67-4f23-bd0e-fd26e55e73b1" (UID: "89548461-1f67-4f23-bd0e-fd26e55e73b1"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.754853 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89548461-1f67-4f23-bd0e-fd26e55e73b1-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "89548461-1f67-4f23-bd0e-fd26e55e73b1" (UID: "89548461-1f67-4f23-bd0e-fd26e55e73b1"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.755011 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89548461-1f67-4f23-bd0e-fd26e55e73b1-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "89548461-1f67-4f23-bd0e-fd26e55e73b1" (UID: "89548461-1f67-4f23-bd0e-fd26e55e73b1"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.755234 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89548461-1f67-4f23-bd0e-fd26e55e73b1-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "89548461-1f67-4f23-bd0e-fd26e55e73b1" (UID: "89548461-1f67-4f23-bd0e-fd26e55e73b1"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.755362 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-mfv8j-pull\" (UniqueName: \"kubernetes.io/secret/89548461-1f67-4f23-bd0e-fd26e55e73b1-builder-dockercfg-mfv8j-pull\") pod \"89548461-1f67-4f23-bd0e-fd26e55e73b1\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") "
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.755396 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/89548461-1f67-4f23-bd0e-fd26e55e73b1-build-system-configs\") pod \"89548461-1f67-4f23-bd0e-fd26e55e73b1\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") "
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.755420 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkrhk\" (UniqueName: \"kubernetes.io/projected/89548461-1f67-4f23-bd0e-fd26e55e73b1-kube-api-access-dkrhk\") pod \"89548461-1f67-4f23-bd0e-fd26e55e73b1\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") "
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.755442 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89548461-1f67-4f23-bd0e-fd26e55e73b1-build-ca-bundles\") pod \"89548461-1f67-4f23-bd0e-fd26e55e73b1\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") "
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.755456 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/89548461-1f67-4f23-bd0e-fd26e55e73b1-buildcachedir\") pod \"89548461-1f67-4f23-bd0e-fd26e55e73b1\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") "
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.755477 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/89548461-1f67-4f23-bd0e-fd26e55e73b1-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"89548461-1f67-4f23-bd0e-fd26e55e73b1\" (UID: \"89548461-1f67-4f23-bd0e-fd26e55e73b1\") "
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.755613 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89548461-1f67-4f23-bd0e-fd26e55e73b1-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "89548461-1f67-4f23-bd0e-fd26e55e73b1" (UID: "89548461-1f67-4f23-bd0e-fd26e55e73b1"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.755676 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/89548461-1f67-4f23-bd0e-fd26e55e73b1-container-storage-root\") on node \"crc\" DevicePath \"\""
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.755687 5120 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/89548461-1f67-4f23-bd0e-fd26e55e73b1-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.755696 5120 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/89548461-1f67-4f23-bd0e-fd26e55e73b1-build-blob-cache\") on node \"crc\" DevicePath \"\""
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.755704 5120 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/89548461-1f67-4f23-bd0e-fd26e55e73b1-buildworkdir\") on node \"crc\" DevicePath \"\""
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.756330 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89548461-1f67-4f23-bd0e-fd26e55e73b1-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "89548461-1f67-4f23-bd0e-fd26e55e73b1" (UID: "89548461-1f67-4f23-bd0e-fd26e55e73b1"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.756529 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89548461-1f67-4f23-bd0e-fd26e55e73b1-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "89548461-1f67-4f23-bd0e-fd26e55e73b1" (UID: "89548461-1f67-4f23-bd0e-fd26e55e73b1"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.756581 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89548461-1f67-4f23-bd0e-fd26e55e73b1-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "89548461-1f67-4f23-bd0e-fd26e55e73b1" (UID: "89548461-1f67-4f23-bd0e-fd26e55e73b1"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.756870 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89548461-1f67-4f23-bd0e-fd26e55e73b1-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "89548461-1f67-4f23-bd0e-fd26e55e73b1" (UID: "89548461-1f67-4f23-bd0e-fd26e55e73b1"). InnerVolumeSpecName "build-ca-bundles".
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.759439 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89548461-1f67-4f23-bd0e-fd26e55e73b1-builder-dockercfg-mfv8j-push" (OuterVolumeSpecName: "builder-dockercfg-mfv8j-push") pod "89548461-1f67-4f23-bd0e-fd26e55e73b1" (UID: "89548461-1f67-4f23-bd0e-fd26e55e73b1"). InnerVolumeSpecName "builder-dockercfg-mfv8j-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.759463 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89548461-1f67-4f23-bd0e-fd26e55e73b1-builder-dockercfg-mfv8j-pull" (OuterVolumeSpecName: "builder-dockercfg-mfv8j-pull") pod "89548461-1f67-4f23-bd0e-fd26e55e73b1" (UID: "89548461-1f67-4f23-bd0e-fd26e55e73b1"). InnerVolumeSpecName "builder-dockercfg-mfv8j-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.760001 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89548461-1f67-4f23-bd0e-fd26e55e73b1-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "89548461-1f67-4f23-bd0e-fd26e55e73b1" (UID: "89548461-1f67-4f23-bd0e-fd26e55e73b1"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.760508 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89548461-1f67-4f23-bd0e-fd26e55e73b1-kube-api-access-dkrhk" (OuterVolumeSpecName: "kube-api-access-dkrhk") pod "89548461-1f67-4f23-bd0e-fd26e55e73b1" (UID: "89548461-1f67-4f23-bd0e-fd26e55e73b1"). InnerVolumeSpecName "kube-api-access-dkrhk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.856838 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-mfv8j-push\" (UniqueName: \"kubernetes.io/secret/89548461-1f67-4f23-bd0e-fd26e55e73b1-builder-dockercfg-mfv8j-push\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.857106 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/89548461-1f67-4f23-bd0e-fd26e55e73b1-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.857212 5120 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89548461-1f67-4f23-bd0e-fd26e55e73b1-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.857328 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-mfv8j-pull\" (UniqueName: \"kubernetes.io/secret/89548461-1f67-4f23-bd0e-fd26e55e73b1-builder-dockercfg-mfv8j-pull\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.857527 5120 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/89548461-1f67-4f23-bd0e-fd26e55e73b1-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.857611 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dkrhk\" (UniqueName: \"kubernetes.io/projected/89548461-1f67-4f23-bd0e-fd26e55e73b1-kube-api-access-dkrhk\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.857624 5120 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/89548461-1f67-4f23-bd0e-fd26e55e73b1-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.857634 5120 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/89548461-1f67-4f23-bd0e-fd26e55e73b1-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.857647 5120 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/89548461-1f67-4f23-bd0e-fd26e55e73b1-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.871344 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-c4fnz"] Dec 11 16:13:48 crc kubenswrapper[5120]: E1211 16:13:48.943025 5120 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 11 16:13:48 crc kubenswrapper[5120]: E1211 16:13:48.943356 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2c599,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-c4fnz_service-telemetry(bd2e2674-1f6d-485e-93f4-33023376b5e0): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 11 16:13:48 crc kubenswrapper[5120]: E1211 16:13:48.944921 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-c4fnz" podUID="bd2e2674-1f6d-485e-93f4-33023376b5e0" Dec 11 16:13:48 crc kubenswrapper[5120]: I1211 16:13:48.997643 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Dec 11 16:13:49 crc kubenswrapper[5120]: I1211 16:13:49.006713 
5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-4-build"] Dec 11 16:13:49 crc kubenswrapper[5120]: I1211 16:13:49.035785 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89548461-1f67-4f23-bd0e-fd26e55e73b1" path="/var/lib/kubelet/pods/89548461-1f67-4f23-bd0e-fd26e55e73b1/volumes" Dec 11 16:13:49 crc kubenswrapper[5120]: I1211 16:13:49.671013 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-c4fnz" event={"ID":"bd2e2674-1f6d-485e-93f4-33023376b5e0","Type":"ContainerStarted","Data":"94b866ac4a215e395459756452aadc09179afdc5964314865d093b1306fa91df"} Dec 11 16:13:49 crc kubenswrapper[5120]: E1211 16:13:49.672082 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-c4fnz" podUID="bd2e2674-1f6d-485e-93f4-33023376b5e0" Dec 11 16:13:50 crc kubenswrapper[5120]: E1211 16:13:50.682275 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-c4fnz" podUID="bd2e2674-1f6d-485e-93f4-33023376b5e0" Dec 11 16:13:53 crc kubenswrapper[5120]: I1211 16:13:53.399647 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-c4fnz"] Dec 11 16:13:53 crc kubenswrapper[5120]: I1211 16:13:53.705891 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-c4fnz" Dec 11 16:13:53 crc kubenswrapper[5120]: I1211 16:13:53.706274 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-c4fnz" event={"ID":"bd2e2674-1f6d-485e-93f4-33023376b5e0","Type":"ContainerDied","Data":"94b866ac4a215e395459756452aadc09179afdc5964314865d093b1306fa91df"} Dec 11 16:13:53 crc kubenswrapper[5120]: I1211 16:13:53.741525 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2c599\" (UniqueName: \"kubernetes.io/projected/bd2e2674-1f6d-485e-93f4-33023376b5e0-kube-api-access-2c599\") pod \"bd2e2674-1f6d-485e-93f4-33023376b5e0\" (UID: \"bd2e2674-1f6d-485e-93f4-33023376b5e0\") " Dec 11 16:13:53 crc kubenswrapper[5120]: I1211 16:13:53.751279 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd2e2674-1f6d-485e-93f4-33023376b5e0-kube-api-access-2c599" (OuterVolumeSpecName: "kube-api-access-2c599") pod "bd2e2674-1f6d-485e-93f4-33023376b5e0" (UID: "bd2e2674-1f6d-485e-93f4-33023376b5e0"). InnerVolumeSpecName "kube-api-access-2c599". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:13:53 crc kubenswrapper[5120]: I1211 16:13:53.843195 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2c599\" (UniqueName: \"kubernetes.io/projected/bd2e2674-1f6d-485e-93f4-33023376b5e0-kube-api-access-2c599\") on node \"crc\" DevicePath \"\"" Dec 11 16:13:54 crc kubenswrapper[5120]: I1211 16:13:54.206303 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-nxhlp"] Dec 11 16:13:54 crc kubenswrapper[5120]: I1211 16:13:54.207034 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="89548461-1f67-4f23-bd0e-fd26e55e73b1" containerName="git-clone" Dec 11 16:13:54 crc kubenswrapper[5120]: I1211 16:13:54.207054 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="89548461-1f67-4f23-bd0e-fd26e55e73b1" containerName="git-clone" Dec 11 16:13:54 crc kubenswrapper[5120]: I1211 16:13:54.207220 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="89548461-1f67-4f23-bd0e-fd26e55e73b1" containerName="git-clone" Dec 11 16:13:54 crc kubenswrapper[5120]: I1211 16:13:54.214702 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-nxhlp" Dec 11 16:13:54 crc kubenswrapper[5120]: I1211 16:13:54.217027 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-nxhlp"] Dec 11 16:13:54 crc kubenswrapper[5120]: I1211 16:13:54.250048 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzkhz\" (UniqueName: \"kubernetes.io/projected/847525ea-e1cb-43ed-98e3-91baecb73494-kube-api-access-kzkhz\") pod \"infrawatch-operators-nxhlp\" (UID: \"847525ea-e1cb-43ed-98e3-91baecb73494\") " pod="service-telemetry/infrawatch-operators-nxhlp" Dec 11 16:13:54 crc kubenswrapper[5120]: I1211 16:13:54.351592 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kzkhz\" (UniqueName: \"kubernetes.io/projected/847525ea-e1cb-43ed-98e3-91baecb73494-kube-api-access-kzkhz\") pod \"infrawatch-operators-nxhlp\" (UID: \"847525ea-e1cb-43ed-98e3-91baecb73494\") " pod="service-telemetry/infrawatch-operators-nxhlp" Dec 11 16:13:54 crc kubenswrapper[5120]: I1211 16:13:54.375223 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzkhz\" (UniqueName: \"kubernetes.io/projected/847525ea-e1cb-43ed-98e3-91baecb73494-kube-api-access-kzkhz\") pod \"infrawatch-operators-nxhlp\" (UID: \"847525ea-e1cb-43ed-98e3-91baecb73494\") " pod="service-telemetry/infrawatch-operators-nxhlp" Dec 11 16:13:54 crc kubenswrapper[5120]: I1211 16:13:54.546188 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-nxhlp" Dec 11 16:13:54 crc kubenswrapper[5120]: I1211 16:13:54.721303 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-c4fnz" Dec 11 16:13:54 crc kubenswrapper[5120]: I1211 16:13:54.787118 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-c4fnz"] Dec 11 16:13:54 crc kubenswrapper[5120]: I1211 16:13:54.790657 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-c4fnz"] Dec 11 16:13:54 crc kubenswrapper[5120]: I1211 16:13:54.835442 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-nxhlp"] Dec 11 16:13:54 crc kubenswrapper[5120]: E1211 16:13:54.912051 5120 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 11 16:13:54 crc kubenswrapper[5120]: E1211 16:13:54.912393 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kzkhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-nxhlp_service-telemetry(847525ea-e1cb-43ed-98e3-91baecb73494): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 11 16:13:54 crc kubenswrapper[5120]: E1211 16:13:54.913610 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:13:55 crc kubenswrapper[5120]: I1211 16:13:55.030208 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd2e2674-1f6d-485e-93f4-33023376b5e0" path="/var/lib/kubelet/pods/bd2e2674-1f6d-485e-93f4-33023376b5e0/volumes" Dec 11 16:13:55 crc kubenswrapper[5120]: I1211 16:13:55.733635 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-nxhlp" event={"ID":"847525ea-e1cb-43ed-98e3-91baecb73494","Type":"ContainerStarted","Data":"964c4ce06207000e2d1892b66bf73d4bdced662471dfaddc97da2a32fea4abfa"} Dec 11 16:13:55 crc kubenswrapper[5120]: E1211 16:13:55.734947 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:13:56 crc kubenswrapper[5120]: E1211 16:13:56.748145 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:13:58 crc kubenswrapper[5120]: I1211 16:13:58.717997 5120 patch_prober.go:28] interesting pod/machine-config-daemon-fpg9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 16:13:58 crc kubenswrapper[5120]: I1211 16:13:58.718416 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" podUID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 16:14:09 crc kubenswrapper[5120]: E1211 16:14:09.100314 5120 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 11 16:14:09 crc kubenswrapper[5120]: E1211 16:14:09.100949 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kzkhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-nxhlp_service-telemetry(847525ea-e1cb-43ed-98e3-91baecb73494): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 11 16:14:09 crc kubenswrapper[5120]: E1211 16:14:09.102391 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:14:23 crc kubenswrapper[5120]: E1211 16:14:23.022843 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:14:28 crc kubenswrapper[5120]: I1211 16:14:28.717857 5120 patch_prober.go:28] interesting pod/machine-config-daemon-fpg9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 16:14:28 crc kubenswrapper[5120]: I1211 16:14:28.718397 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" podUID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 16:14:36 crc kubenswrapper[5120]: E1211 16:14:36.078110 5120 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 11 16:14:36 crc kubenswrapper[5120]: E1211 16:14:36.078719 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kzkhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-nxhlp_service-telemetry(847525ea-e1cb-43ed-98e3-91baecb73494): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 11 16:14:36 crc kubenswrapper[5120]: E1211 16:14:36.079872 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:14:47 crc kubenswrapper[5120]: E1211 16:14:47.022834 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:14:53 crc kubenswrapper[5120]: I1211 16:14:53.464340 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8hwbr"] Dec 11 16:14:53 crc kubenswrapper[5120]: I1211 16:14:53.473237 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8hwbr" Dec 11 16:14:53 crc kubenswrapper[5120]: I1211 16:14:53.497959 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8hwbr"] Dec 11 16:14:53 crc kubenswrapper[5120]: I1211 16:14:53.549014 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0ef7396-0521-45c4-9c34-9e500167c705-utilities\") pod \"community-operators-8hwbr\" (UID: \"e0ef7396-0521-45c4-9c34-9e500167c705\") " pod="openshift-marketplace/community-operators-8hwbr" Dec 11 16:14:53 crc kubenswrapper[5120]: I1211 16:14:53.549438 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0ef7396-0521-45c4-9c34-9e500167c705-catalog-content\") pod \"community-operators-8hwbr\" (UID: \"e0ef7396-0521-45c4-9c34-9e500167c705\") " pod="openshift-marketplace/community-operators-8hwbr" Dec 11 16:14:53 crc kubenswrapper[5120]: I1211 16:14:53.549488 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqqn7\" (UniqueName: \"kubernetes.io/projected/e0ef7396-0521-45c4-9c34-9e500167c705-kube-api-access-gqqn7\") pod \"community-operators-8hwbr\" (UID: \"e0ef7396-0521-45c4-9c34-9e500167c705\") " pod="openshift-marketplace/community-operators-8hwbr" Dec 11 16:14:53 crc kubenswrapper[5120]: I1211 16:14:53.651111 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0ef7396-0521-45c4-9c34-9e500167c705-utilities\") pod \"community-operators-8hwbr\" (UID: \"e0ef7396-0521-45c4-9c34-9e500167c705\") " pod="openshift-marketplace/community-operators-8hwbr" Dec 11 16:14:53 crc kubenswrapper[5120]: I1211 16:14:53.651171 5120 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0ef7396-0521-45c4-9c34-9e500167c705-catalog-content\") pod \"community-operators-8hwbr\" (UID: \"e0ef7396-0521-45c4-9c34-9e500167c705\") " pod="openshift-marketplace/community-operators-8hwbr" Dec 11 16:14:53 crc kubenswrapper[5120]: I1211 16:14:53.651210 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gqqn7\" (UniqueName: \"kubernetes.io/projected/e0ef7396-0521-45c4-9c34-9e500167c705-kube-api-access-gqqn7\") pod \"community-operators-8hwbr\" (UID: \"e0ef7396-0521-45c4-9c34-9e500167c705\") " pod="openshift-marketplace/community-operators-8hwbr" Dec 11 16:14:53 crc kubenswrapper[5120]: I1211 16:14:53.652062 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0ef7396-0521-45c4-9c34-9e500167c705-utilities\") pod \"community-operators-8hwbr\" (UID: \"e0ef7396-0521-45c4-9c34-9e500167c705\") " pod="openshift-marketplace/community-operators-8hwbr" Dec 11 16:14:53 crc kubenswrapper[5120]: I1211 16:14:53.652356 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0ef7396-0521-45c4-9c34-9e500167c705-catalog-content\") pod \"community-operators-8hwbr\" (UID: \"e0ef7396-0521-45c4-9c34-9e500167c705\") " pod="openshift-marketplace/community-operators-8hwbr" Dec 11 16:14:53 crc kubenswrapper[5120]: I1211 16:14:53.676479 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqqn7\" (UniqueName: \"kubernetes.io/projected/e0ef7396-0521-45c4-9c34-9e500167c705-kube-api-access-gqqn7\") pod \"community-operators-8hwbr\" (UID: \"e0ef7396-0521-45c4-9c34-9e500167c705\") " pod="openshift-marketplace/community-operators-8hwbr" Dec 11 16:14:53 crc kubenswrapper[5120]: I1211 16:14:53.848998 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8hwbr" Dec 11 16:14:54 crc kubenswrapper[5120]: I1211 16:14:54.260187 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8hwbr"] Dec 11 16:14:54 crc kubenswrapper[5120]: W1211 16:14:54.267615 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0ef7396_0521_45c4_9c34_9e500167c705.slice/crio-94731c1adf5394c9518a5888fea157374a20c54b14d018cc9369606983ff0b1e WatchSource:0}: Error finding container 94731c1adf5394c9518a5888fea157374a20c54b14d018cc9369606983ff0b1e: Status 404 returned error can't find the container with id 94731c1adf5394c9518a5888fea157374a20c54b14d018cc9369606983ff0b1e Dec 11 16:14:55 crc kubenswrapper[5120]: I1211 16:14:55.217654 5120 generic.go:358] "Generic (PLEG): container finished" podID="e0ef7396-0521-45c4-9c34-9e500167c705" containerID="fec175197af7e887764e0485db909e6981ee52b4acb49513dba9910a35c29cb8" exitCode=0 Dec 11 16:14:55 crc kubenswrapper[5120]: I1211 16:14:55.219404 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8hwbr" event={"ID":"e0ef7396-0521-45c4-9c34-9e500167c705","Type":"ContainerDied","Data":"fec175197af7e887764e0485db909e6981ee52b4acb49513dba9910a35c29cb8"} Dec 11 16:14:55 crc kubenswrapper[5120]: I1211 16:14:55.221133 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8hwbr" event={"ID":"e0ef7396-0521-45c4-9c34-9e500167c705","Type":"ContainerStarted","Data":"94731c1adf5394c9518a5888fea157374a20c54b14d018cc9369606983ff0b1e"} Dec 11 16:14:56 crc kubenswrapper[5120]: I1211 16:14:56.229718 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8hwbr" 
event={"ID":"e0ef7396-0521-45c4-9c34-9e500167c705","Type":"ContainerStarted","Data":"8125910a44123ef4bb3b25fffca6ef7af287712366106db326d10678683247f3"} Dec 11 16:14:57 crc kubenswrapper[5120]: I1211 16:14:57.240930 5120 generic.go:358] "Generic (PLEG): container finished" podID="e0ef7396-0521-45c4-9c34-9e500167c705" containerID="8125910a44123ef4bb3b25fffca6ef7af287712366106db326d10678683247f3" exitCode=0 Dec 11 16:14:57 crc kubenswrapper[5120]: I1211 16:14:57.241060 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8hwbr" event={"ID":"e0ef7396-0521-45c4-9c34-9e500167c705","Type":"ContainerDied","Data":"8125910a44123ef4bb3b25fffca6ef7af287712366106db326d10678683247f3"} Dec 11 16:14:58 crc kubenswrapper[5120]: I1211 16:14:58.033747 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4txlq"] Dec 11 16:14:58 crc kubenswrapper[5120]: I1211 16:14:58.040928 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4txlq" Dec 11 16:14:58 crc kubenswrapper[5120]: I1211 16:14:58.047879 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4txlq"] Dec 11 16:14:58 crc kubenswrapper[5120]: I1211 16:14:58.230043 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1134ff1f-8d77-4b2a-9123-e8e8419947c8-utilities\") pod \"certified-operators-4txlq\" (UID: \"1134ff1f-8d77-4b2a-9123-e8e8419947c8\") " pod="openshift-marketplace/certified-operators-4txlq" Dec 11 16:14:58 crc kubenswrapper[5120]: I1211 16:14:58.230387 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcqz2\" (UniqueName: \"kubernetes.io/projected/1134ff1f-8d77-4b2a-9123-e8e8419947c8-kube-api-access-jcqz2\") pod \"certified-operators-4txlq\" (UID: \"1134ff1f-8d77-4b2a-9123-e8e8419947c8\") " pod="openshift-marketplace/certified-operators-4txlq" Dec 11 16:14:58 crc kubenswrapper[5120]: I1211 16:14:58.230430 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1134ff1f-8d77-4b2a-9123-e8e8419947c8-catalog-content\") pod \"certified-operators-4txlq\" (UID: \"1134ff1f-8d77-4b2a-9123-e8e8419947c8\") " pod="openshift-marketplace/certified-operators-4txlq" Dec 11 16:14:58 crc kubenswrapper[5120]: I1211 16:14:58.260620 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8hwbr" event={"ID":"e0ef7396-0521-45c4-9c34-9e500167c705","Type":"ContainerStarted","Data":"d9d90fb4fbade3d090b6f7306c7328805c6fb11ff1285fa6363fe418871e0a80"} Dec 11 16:14:58 crc kubenswrapper[5120]: I1211 16:14:58.286966 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-8hwbr" podStartSLOduration=4.4516819739999995 podStartE2EDuration="5.286948452s" podCreationTimestamp="2025-12-11 16:14:53 +0000 UTC" firstStartedPulling="2025-12-11 16:14:55.220476445 +0000 UTC m=+844.474779776" lastFinishedPulling="2025-12-11 16:14:56.055742923 +0000 UTC m=+845.310046254" observedRunningTime="2025-12-11 16:14:58.28417514 +0000 UTC m=+847.538478481" watchObservedRunningTime="2025-12-11 16:14:58.286948452 +0000 UTC m=+847.541251783" Dec 11 16:14:58 crc kubenswrapper[5120]: I1211 16:14:58.331302 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1134ff1f-8d77-4b2a-9123-e8e8419947c8-utilities\") pod \"certified-operators-4txlq\" (UID: \"1134ff1f-8d77-4b2a-9123-e8e8419947c8\") " pod="openshift-marketplace/certified-operators-4txlq" Dec 11 16:14:58 crc kubenswrapper[5120]: I1211 16:14:58.331367 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jcqz2\" (UniqueName: \"kubernetes.io/projected/1134ff1f-8d77-4b2a-9123-e8e8419947c8-kube-api-access-jcqz2\") pod \"certified-operators-4txlq\" (UID: \"1134ff1f-8d77-4b2a-9123-e8e8419947c8\") " pod="openshift-marketplace/certified-operators-4txlq" Dec 11 16:14:58 crc kubenswrapper[5120]: I1211 16:14:58.331407 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1134ff1f-8d77-4b2a-9123-e8e8419947c8-catalog-content\") pod \"certified-operators-4txlq\" (UID: \"1134ff1f-8d77-4b2a-9123-e8e8419947c8\") " pod="openshift-marketplace/certified-operators-4txlq" Dec 11 16:14:58 crc kubenswrapper[5120]: I1211 16:14:58.331897 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1134ff1f-8d77-4b2a-9123-e8e8419947c8-utilities\") pod \"certified-operators-4txlq\" (UID: 
\"1134ff1f-8d77-4b2a-9123-e8e8419947c8\") " pod="openshift-marketplace/certified-operators-4txlq" Dec 11 16:14:58 crc kubenswrapper[5120]: I1211 16:14:58.331890 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1134ff1f-8d77-4b2a-9123-e8e8419947c8-catalog-content\") pod \"certified-operators-4txlq\" (UID: \"1134ff1f-8d77-4b2a-9123-e8e8419947c8\") " pod="openshift-marketplace/certified-operators-4txlq" Dec 11 16:14:58 crc kubenswrapper[5120]: I1211 16:14:58.358029 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcqz2\" (UniqueName: \"kubernetes.io/projected/1134ff1f-8d77-4b2a-9123-e8e8419947c8-kube-api-access-jcqz2\") pod \"certified-operators-4txlq\" (UID: \"1134ff1f-8d77-4b2a-9123-e8e8419947c8\") " pod="openshift-marketplace/certified-operators-4txlq" Dec 11 16:14:58 crc kubenswrapper[5120]: I1211 16:14:58.376846 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4txlq" Dec 11 16:14:58 crc kubenswrapper[5120]: I1211 16:14:58.609884 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4txlq"] Dec 11 16:14:58 crc kubenswrapper[5120]: I1211 16:14:58.718060 5120 patch_prober.go:28] interesting pod/machine-config-daemon-fpg9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 16:14:58 crc kubenswrapper[5120]: I1211 16:14:58.718339 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" podUID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 16:14:58 crc kubenswrapper[5120]: I1211 16:14:58.718397 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" Dec 11 16:14:58 crc kubenswrapper[5120]: I1211 16:14:58.718900 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c1c4951fd13c7ebf545cc70952dba6bad301362a8233620d9c4df1820bb44170"} pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 16:14:58 crc kubenswrapper[5120]: I1211 16:14:58.718960 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" podUID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerName="machine-config-daemon" 
containerID="cri-o://c1c4951fd13c7ebf545cc70952dba6bad301362a8233620d9c4df1820bb44170" gracePeriod=600 Dec 11 16:14:59 crc kubenswrapper[5120]: I1211 16:14:59.271069 5120 generic.go:358] "Generic (PLEG): container finished" podID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerID="c1c4951fd13c7ebf545cc70952dba6bad301362a8233620d9c4df1820bb44170" exitCode=0 Dec 11 16:14:59 crc kubenswrapper[5120]: I1211 16:14:59.271229 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" event={"ID":"e868a29f-b837-4513-ad30-f5b6c4354a09","Type":"ContainerDied","Data":"c1c4951fd13c7ebf545cc70952dba6bad301362a8233620d9c4df1820bb44170"} Dec 11 16:14:59 crc kubenswrapper[5120]: I1211 16:14:59.271977 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" event={"ID":"e868a29f-b837-4513-ad30-f5b6c4354a09","Type":"ContainerStarted","Data":"58d458b8c88e48677a4cc48872bb0622adbaa7a6b5faa341f2ec0189bf671557"} Dec 11 16:14:59 crc kubenswrapper[5120]: I1211 16:14:59.272012 5120 scope.go:117] "RemoveContainer" containerID="a09fb695df5d1b3ee680128c4cd59d89388d5ef467e74023daef155b556f17c3" Dec 11 16:14:59 crc kubenswrapper[5120]: I1211 16:14:59.275409 5120 generic.go:358] "Generic (PLEG): container finished" podID="1134ff1f-8d77-4b2a-9123-e8e8419947c8" containerID="8ba417cc90e6cfea58e678789ae5058bbc3d2268b159dbaa8ff1bc66231d66a1" exitCode=0 Dec 11 16:14:59 crc kubenswrapper[5120]: I1211 16:14:59.275537 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4txlq" event={"ID":"1134ff1f-8d77-4b2a-9123-e8e8419947c8","Type":"ContainerDied","Data":"8ba417cc90e6cfea58e678789ae5058bbc3d2268b159dbaa8ff1bc66231d66a1"} Dec 11 16:14:59 crc kubenswrapper[5120]: I1211 16:14:59.275581 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4txlq" 
event={"ID":"1134ff1f-8d77-4b2a-9123-e8e8419947c8","Type":"ContainerStarted","Data":"d54bb0be580e6f413698285f88ede535855428e892e0d34ceb74458f3dc01807"} Dec 11 16:15:00 crc kubenswrapper[5120]: I1211 16:15:00.162392 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424495-2b2qd"] Dec 11 16:15:00 crc kubenswrapper[5120]: I1211 16:15:00.169591 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424495-2b2qd" Dec 11 16:15:00 crc kubenswrapper[5120]: I1211 16:15:00.172397 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 11 16:15:00 crc kubenswrapper[5120]: I1211 16:15:00.172650 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 11 16:15:00 crc kubenswrapper[5120]: I1211 16:15:00.177275 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424495-2b2qd"] Dec 11 16:15:00 crc kubenswrapper[5120]: I1211 16:15:00.257722 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6-secret-volume\") pod \"collect-profiles-29424495-2b2qd\" (UID: \"d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424495-2b2qd" Dec 11 16:15:00 crc kubenswrapper[5120]: I1211 16:15:00.257796 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6-config-volume\") pod \"collect-profiles-29424495-2b2qd\" (UID: \"d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29424495-2b2qd" Dec 11 16:15:00 crc kubenswrapper[5120]: I1211 16:15:00.257843 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q5xt\" (UniqueName: \"kubernetes.io/projected/d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6-kube-api-access-2q5xt\") pod \"collect-profiles-29424495-2b2qd\" (UID: \"d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424495-2b2qd" Dec 11 16:15:00 crc kubenswrapper[5120]: I1211 16:15:00.285259 5120 generic.go:358] "Generic (PLEG): container finished" podID="1134ff1f-8d77-4b2a-9123-e8e8419947c8" containerID="9bfedab1e139082dcffe49602d387555eefb49d7fd6762e7923c11d92d80249c" exitCode=0 Dec 11 16:15:00 crc kubenswrapper[5120]: I1211 16:15:00.285332 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4txlq" event={"ID":"1134ff1f-8d77-4b2a-9123-e8e8419947c8","Type":"ContainerDied","Data":"9bfedab1e139082dcffe49602d387555eefb49d7fd6762e7923c11d92d80249c"} Dec 11 16:15:00 crc kubenswrapper[5120]: I1211 16:15:00.359059 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6-secret-volume\") pod \"collect-profiles-29424495-2b2qd\" (UID: \"d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424495-2b2qd" Dec 11 16:15:00 crc kubenswrapper[5120]: I1211 16:15:00.359489 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6-config-volume\") pod \"collect-profiles-29424495-2b2qd\" (UID: \"d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424495-2b2qd" Dec 11 16:15:00 crc kubenswrapper[5120]: I1211 
16:15:00.359609 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2q5xt\" (UniqueName: \"kubernetes.io/projected/d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6-kube-api-access-2q5xt\") pod \"collect-profiles-29424495-2b2qd\" (UID: \"d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424495-2b2qd" Dec 11 16:15:00 crc kubenswrapper[5120]: I1211 16:15:00.360824 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6-config-volume\") pod \"collect-profiles-29424495-2b2qd\" (UID: \"d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424495-2b2qd" Dec 11 16:15:00 crc kubenswrapper[5120]: I1211 16:15:00.368785 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6-secret-volume\") pod \"collect-profiles-29424495-2b2qd\" (UID: \"d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424495-2b2qd" Dec 11 16:15:00 crc kubenswrapper[5120]: I1211 16:15:00.376269 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q5xt\" (UniqueName: \"kubernetes.io/projected/d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6-kube-api-access-2q5xt\") pod \"collect-profiles-29424495-2b2qd\" (UID: \"d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424495-2b2qd" Dec 11 16:15:00 crc kubenswrapper[5120]: I1211 16:15:00.493548 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424495-2b2qd" Dec 11 16:15:00 crc kubenswrapper[5120]: I1211 16:15:00.882006 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424495-2b2qd"] Dec 11 16:15:01 crc kubenswrapper[5120]: E1211 16:15:01.028259 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:15:01 crc kubenswrapper[5120]: I1211 16:15:01.292449 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4txlq" event={"ID":"1134ff1f-8d77-4b2a-9123-e8e8419947c8","Type":"ContainerStarted","Data":"0f17c790a2026f77258c6e697a3dde52d9a4b8747f351ddf7fcbcad2b5b9de28"} Dec 11 16:15:01 crc kubenswrapper[5120]: I1211 16:15:01.293840 5120 generic.go:358] "Generic (PLEG): container finished" podID="d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6" containerID="e3663536666bd77e0bb4cb16b40b138b32d40f4aed1d162150691727b9876aea" exitCode=0 Dec 11 16:15:01 crc kubenswrapper[5120]: I1211 16:15:01.293886 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29424495-2b2qd" event={"ID":"d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6","Type":"ContainerDied","Data":"e3663536666bd77e0bb4cb16b40b138b32d40f4aed1d162150691727b9876aea"} Dec 11 16:15:01 crc kubenswrapper[5120]: I1211 16:15:01.293931 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424495-2b2qd" event={"ID":"d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6","Type":"ContainerStarted","Data":"4c92b8a001b70bca14a0b247f4706307322706e82b087b454584208e151bb606"} Dec 11 16:15:01 crc kubenswrapper[5120]: I1211 16:15:01.310251 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4txlq" podStartSLOduration=2.7396254840000003 podStartE2EDuration="3.310239698s" podCreationTimestamp="2025-12-11 16:14:58 +0000 UTC" firstStartedPulling="2025-12-11 16:14:59.276941499 +0000 UTC m=+848.531244870" lastFinishedPulling="2025-12-11 16:14:59.847555753 +0000 UTC m=+849.101859084" observedRunningTime="2025-12-11 16:15:01.306309336 +0000 UTC m=+850.560612667" watchObservedRunningTime="2025-12-11 16:15:01.310239698 +0000 UTC m=+850.564543029" Dec 11 16:15:02 crc kubenswrapper[5120]: I1211 16:15:02.482611 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424495-2b2qd" Dec 11 16:15:02 crc kubenswrapper[5120]: I1211 16:15:02.590070 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6-config-volume\") pod \"d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6\" (UID: \"d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6\") " Dec 11 16:15:02 crc kubenswrapper[5120]: I1211 16:15:02.590183 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6-secret-volume\") pod \"d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6\" (UID: \"d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6\") " Dec 11 16:15:02 crc kubenswrapper[5120]: I1211 16:15:02.590261 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2q5xt\" (UniqueName: \"kubernetes.io/projected/d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6-kube-api-access-2q5xt\") pod \"d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6\" (UID: \"d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6\") " Dec 11 16:15:02 crc kubenswrapper[5120]: I1211 16:15:02.590751 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6-config-volume" (OuterVolumeSpecName: "config-volume") pod "d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6" (UID: "d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 11 16:15:02 crc kubenswrapper[5120]: I1211 16:15:02.591684 5120 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 16:15:02 crc kubenswrapper[5120]: I1211 16:15:02.596027 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6" (UID: "d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 11 16:15:02 crc kubenswrapper[5120]: I1211 16:15:02.596288 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6-kube-api-access-2q5xt" (OuterVolumeSpecName: "kube-api-access-2q5xt") pod "d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6" (UID: "d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6"). InnerVolumeSpecName "kube-api-access-2q5xt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:15:02 crc kubenswrapper[5120]: I1211 16:15:02.692696 5120 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 16:15:02 crc kubenswrapper[5120]: I1211 16:15:02.692741 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2q5xt\" (UniqueName: \"kubernetes.io/projected/d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6-kube-api-access-2q5xt\") on node \"crc\" DevicePath \"\"" Dec 11 16:15:03 crc kubenswrapper[5120]: I1211 16:15:03.306968 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424495-2b2qd" Dec 11 16:15:03 crc kubenswrapper[5120]: I1211 16:15:03.307037 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424495-2b2qd" event={"ID":"d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6","Type":"ContainerDied","Data":"4c92b8a001b70bca14a0b247f4706307322706e82b087b454584208e151bb606"} Dec 11 16:15:03 crc kubenswrapper[5120]: I1211 16:15:03.307188 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c92b8a001b70bca14a0b247f4706307322706e82b087b454584208e151bb606" Dec 11 16:15:03 crc kubenswrapper[5120]: I1211 16:15:03.850110 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-8hwbr" Dec 11 16:15:03 crc kubenswrapper[5120]: I1211 16:15:03.850174 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8hwbr" Dec 11 16:15:03 crc kubenswrapper[5120]: I1211 16:15:03.885737 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8hwbr" Dec 11 16:15:04 crc kubenswrapper[5120]: I1211 16:15:04.350653 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8hwbr" Dec 11 16:15:05 crc kubenswrapper[5120]: I1211 16:15:05.030377 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8hwbr"] Dec 11 16:15:06 crc kubenswrapper[5120]: I1211 16:15:06.327607 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8hwbr" podUID="e0ef7396-0521-45c4-9c34-9e500167c705" containerName="registry-server" containerID="cri-o://d9d90fb4fbade3d090b6f7306c7328805c6fb11ff1285fa6363fe418871e0a80" gracePeriod=2 Dec 11 16:15:07 crc 
kubenswrapper[5120]: I1211 16:15:07.341967 5120 generic.go:358] "Generic (PLEG): container finished" podID="e0ef7396-0521-45c4-9c34-9e500167c705" containerID="d9d90fb4fbade3d090b6f7306c7328805c6fb11ff1285fa6363fe418871e0a80" exitCode=0 Dec 11 16:15:07 crc kubenswrapper[5120]: I1211 16:15:07.342313 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8hwbr" event={"ID":"e0ef7396-0521-45c4-9c34-9e500167c705","Type":"ContainerDied","Data":"d9d90fb4fbade3d090b6f7306c7328805c6fb11ff1285fa6363fe418871e0a80"} Dec 11 16:15:07 crc kubenswrapper[5120]: I1211 16:15:07.403945 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8hwbr" Dec 11 16:15:07 crc kubenswrapper[5120]: I1211 16:15:07.459559 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0ef7396-0521-45c4-9c34-9e500167c705-catalog-content\") pod \"e0ef7396-0521-45c4-9c34-9e500167c705\" (UID: \"e0ef7396-0521-45c4-9c34-9e500167c705\") " Dec 11 16:15:07 crc kubenswrapper[5120]: I1211 16:15:07.459773 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0ef7396-0521-45c4-9c34-9e500167c705-utilities\") pod \"e0ef7396-0521-45c4-9c34-9e500167c705\" (UID: \"e0ef7396-0521-45c4-9c34-9e500167c705\") " Dec 11 16:15:07 crc kubenswrapper[5120]: I1211 16:15:07.459842 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqqn7\" (UniqueName: \"kubernetes.io/projected/e0ef7396-0521-45c4-9c34-9e500167c705-kube-api-access-gqqn7\") pod \"e0ef7396-0521-45c4-9c34-9e500167c705\" (UID: \"e0ef7396-0521-45c4-9c34-9e500167c705\") " Dec 11 16:15:07 crc kubenswrapper[5120]: I1211 16:15:07.461926 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e0ef7396-0521-45c4-9c34-9e500167c705-utilities" (OuterVolumeSpecName: "utilities") pod "e0ef7396-0521-45c4-9c34-9e500167c705" (UID: "e0ef7396-0521-45c4-9c34-9e500167c705"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:15:07 crc kubenswrapper[5120]: I1211 16:15:07.472485 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0ef7396-0521-45c4-9c34-9e500167c705-kube-api-access-gqqn7" (OuterVolumeSpecName: "kube-api-access-gqqn7") pod "e0ef7396-0521-45c4-9c34-9e500167c705" (UID: "e0ef7396-0521-45c4-9c34-9e500167c705"). InnerVolumeSpecName "kube-api-access-gqqn7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:15:07 crc kubenswrapper[5120]: I1211 16:15:07.515288 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0ef7396-0521-45c4-9c34-9e500167c705-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e0ef7396-0521-45c4-9c34-9e500167c705" (UID: "e0ef7396-0521-45c4-9c34-9e500167c705"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:15:07 crc kubenswrapper[5120]: I1211 16:15:07.561069 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gqqn7\" (UniqueName: \"kubernetes.io/projected/e0ef7396-0521-45c4-9c34-9e500167c705-kube-api-access-gqqn7\") on node \"crc\" DevicePath \"\"" Dec 11 16:15:07 crc kubenswrapper[5120]: I1211 16:15:07.561106 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0ef7396-0521-45c4-9c34-9e500167c705-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 16:15:07 crc kubenswrapper[5120]: I1211 16:15:07.561117 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0ef7396-0521-45c4-9c34-9e500167c705-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 16:15:08 crc kubenswrapper[5120]: I1211 16:15:08.351820 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8hwbr" event={"ID":"e0ef7396-0521-45c4-9c34-9e500167c705","Type":"ContainerDied","Data":"94731c1adf5394c9518a5888fea157374a20c54b14d018cc9369606983ff0b1e"} Dec 11 16:15:08 crc kubenswrapper[5120]: I1211 16:15:08.351873 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8hwbr" Dec 11 16:15:08 crc kubenswrapper[5120]: I1211 16:15:08.352402 5120 scope.go:117] "RemoveContainer" containerID="d9d90fb4fbade3d090b6f7306c7328805c6fb11ff1285fa6363fe418871e0a80" Dec 11 16:15:08 crc kubenswrapper[5120]: I1211 16:15:08.375640 5120 scope.go:117] "RemoveContainer" containerID="8125910a44123ef4bb3b25fffca6ef7af287712366106db326d10678683247f3" Dec 11 16:15:08 crc kubenswrapper[5120]: I1211 16:15:08.377037 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4txlq" Dec 11 16:15:08 crc kubenswrapper[5120]: I1211 16:15:08.377130 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-4txlq" Dec 11 16:15:08 crc kubenswrapper[5120]: I1211 16:15:08.392941 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8hwbr"] Dec 11 16:15:08 crc kubenswrapper[5120]: I1211 16:15:08.400407 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8hwbr"] Dec 11 16:15:08 crc kubenswrapper[5120]: I1211 16:15:08.416238 5120 scope.go:117] "RemoveContainer" containerID="fec175197af7e887764e0485db909e6981ee52b4acb49513dba9910a35c29cb8" Dec 11 16:15:08 crc kubenswrapper[5120]: I1211 16:15:08.419704 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4txlq" Dec 11 16:15:09 crc kubenswrapper[5120]: I1211 16:15:09.034189 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0ef7396-0521-45c4-9c34-9e500167c705" path="/var/lib/kubelet/pods/e0ef7396-0521-45c4-9c34-9e500167c705/volumes" Dec 11 16:15:09 crc kubenswrapper[5120]: I1211 16:15:09.408831 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4txlq" Dec 11 16:15:10 
crc kubenswrapper[5120]: I1211 16:15:10.625830 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4txlq"] Dec 11 16:15:12 crc kubenswrapper[5120]: I1211 16:15:12.379054 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4txlq" podUID="1134ff1f-8d77-4b2a-9123-e8e8419947c8" containerName="registry-server" containerID="cri-o://0f17c790a2026f77258c6e697a3dde52d9a4b8747f351ddf7fcbcad2b5b9de28" gracePeriod=2 Dec 11 16:15:12 crc kubenswrapper[5120]: I1211 16:15:12.835718 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4txlq" Dec 11 16:15:12 crc kubenswrapper[5120]: I1211 16:15:12.950361 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1134ff1f-8d77-4b2a-9123-e8e8419947c8-utilities\") pod \"1134ff1f-8d77-4b2a-9123-e8e8419947c8\" (UID: \"1134ff1f-8d77-4b2a-9123-e8e8419947c8\") " Dec 11 16:15:12 crc kubenswrapper[5120]: I1211 16:15:12.950505 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1134ff1f-8d77-4b2a-9123-e8e8419947c8-catalog-content\") pod \"1134ff1f-8d77-4b2a-9123-e8e8419947c8\" (UID: \"1134ff1f-8d77-4b2a-9123-e8e8419947c8\") " Dec 11 16:15:12 crc kubenswrapper[5120]: I1211 16:15:12.950550 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcqz2\" (UniqueName: \"kubernetes.io/projected/1134ff1f-8d77-4b2a-9123-e8e8419947c8-kube-api-access-jcqz2\") pod \"1134ff1f-8d77-4b2a-9123-e8e8419947c8\" (UID: \"1134ff1f-8d77-4b2a-9123-e8e8419947c8\") " Dec 11 16:15:12 crc kubenswrapper[5120]: I1211 16:15:12.951621 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/1134ff1f-8d77-4b2a-9123-e8e8419947c8-utilities" (OuterVolumeSpecName: "utilities") pod "1134ff1f-8d77-4b2a-9123-e8e8419947c8" (UID: "1134ff1f-8d77-4b2a-9123-e8e8419947c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:15:12 crc kubenswrapper[5120]: I1211 16:15:12.952000 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1134ff1f-8d77-4b2a-9123-e8e8419947c8-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 16:15:12 crc kubenswrapper[5120]: I1211 16:15:12.959701 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1134ff1f-8d77-4b2a-9123-e8e8419947c8-kube-api-access-jcqz2" (OuterVolumeSpecName: "kube-api-access-jcqz2") pod "1134ff1f-8d77-4b2a-9123-e8e8419947c8" (UID: "1134ff1f-8d77-4b2a-9123-e8e8419947c8"). InnerVolumeSpecName "kube-api-access-jcqz2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:15:12 crc kubenswrapper[5120]: I1211 16:15:12.993514 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1134ff1f-8d77-4b2a-9123-e8e8419947c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1134ff1f-8d77-4b2a-9123-e8e8419947c8" (UID: "1134ff1f-8d77-4b2a-9123-e8e8419947c8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:15:13 crc kubenswrapper[5120]: I1211 16:15:13.054240 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1134ff1f-8d77-4b2a-9123-e8e8419947c8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 16:15:13 crc kubenswrapper[5120]: I1211 16:15:13.054698 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jcqz2\" (UniqueName: \"kubernetes.io/projected/1134ff1f-8d77-4b2a-9123-e8e8419947c8-kube-api-access-jcqz2\") on node \"crc\" DevicePath \"\"" Dec 11 16:15:13 crc kubenswrapper[5120]: I1211 16:15:13.388340 5120 generic.go:358] "Generic (PLEG): container finished" podID="1134ff1f-8d77-4b2a-9123-e8e8419947c8" containerID="0f17c790a2026f77258c6e697a3dde52d9a4b8747f351ddf7fcbcad2b5b9de28" exitCode=0 Dec 11 16:15:13 crc kubenswrapper[5120]: I1211 16:15:13.388420 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4txlq" event={"ID":"1134ff1f-8d77-4b2a-9123-e8e8419947c8","Type":"ContainerDied","Data":"0f17c790a2026f77258c6e697a3dde52d9a4b8747f351ddf7fcbcad2b5b9de28"} Dec 11 16:15:13 crc kubenswrapper[5120]: I1211 16:15:13.388452 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4txlq" event={"ID":"1134ff1f-8d77-4b2a-9123-e8e8419947c8","Type":"ContainerDied","Data":"d54bb0be580e6f413698285f88ede535855428e892e0d34ceb74458f3dc01807"} Dec 11 16:15:13 crc kubenswrapper[5120]: I1211 16:15:13.388473 5120 scope.go:117] "RemoveContainer" containerID="0f17c790a2026f77258c6e697a3dde52d9a4b8747f351ddf7fcbcad2b5b9de28" Dec 11 16:15:13 crc kubenswrapper[5120]: I1211 16:15:13.388534 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4txlq" Dec 11 16:15:13 crc kubenswrapper[5120]: I1211 16:15:13.414930 5120 scope.go:117] "RemoveContainer" containerID="9bfedab1e139082dcffe49602d387555eefb49d7fd6762e7923c11d92d80249c" Dec 11 16:15:13 crc kubenswrapper[5120]: I1211 16:15:13.420657 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4txlq"] Dec 11 16:15:13 crc kubenswrapper[5120]: I1211 16:15:13.431906 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4txlq"] Dec 11 16:15:13 crc kubenswrapper[5120]: I1211 16:15:13.437926 5120 scope.go:117] "RemoveContainer" containerID="8ba417cc90e6cfea58e678789ae5058bbc3d2268b159dbaa8ff1bc66231d66a1" Dec 11 16:15:13 crc kubenswrapper[5120]: I1211 16:15:13.461215 5120 scope.go:117] "RemoveContainer" containerID="0f17c790a2026f77258c6e697a3dde52d9a4b8747f351ddf7fcbcad2b5b9de28" Dec 11 16:15:13 crc kubenswrapper[5120]: E1211 16:15:13.461749 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f17c790a2026f77258c6e697a3dde52d9a4b8747f351ddf7fcbcad2b5b9de28\": container with ID starting with 0f17c790a2026f77258c6e697a3dde52d9a4b8747f351ddf7fcbcad2b5b9de28 not found: ID does not exist" containerID="0f17c790a2026f77258c6e697a3dde52d9a4b8747f351ddf7fcbcad2b5b9de28" Dec 11 16:15:13 crc kubenswrapper[5120]: I1211 16:15:13.461802 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f17c790a2026f77258c6e697a3dde52d9a4b8747f351ddf7fcbcad2b5b9de28"} err="failed to get container status \"0f17c790a2026f77258c6e697a3dde52d9a4b8747f351ddf7fcbcad2b5b9de28\": rpc error: code = NotFound desc = could not find container \"0f17c790a2026f77258c6e697a3dde52d9a4b8747f351ddf7fcbcad2b5b9de28\": container with ID starting with 0f17c790a2026f77258c6e697a3dde52d9a4b8747f351ddf7fcbcad2b5b9de28 not 
found: ID does not exist" Dec 11 16:15:13 crc kubenswrapper[5120]: I1211 16:15:13.461835 5120 scope.go:117] "RemoveContainer" containerID="9bfedab1e139082dcffe49602d387555eefb49d7fd6762e7923c11d92d80249c" Dec 11 16:15:13 crc kubenswrapper[5120]: E1211 16:15:13.462395 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bfedab1e139082dcffe49602d387555eefb49d7fd6762e7923c11d92d80249c\": container with ID starting with 9bfedab1e139082dcffe49602d387555eefb49d7fd6762e7923c11d92d80249c not found: ID does not exist" containerID="9bfedab1e139082dcffe49602d387555eefb49d7fd6762e7923c11d92d80249c" Dec 11 16:15:13 crc kubenswrapper[5120]: I1211 16:15:13.462457 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bfedab1e139082dcffe49602d387555eefb49d7fd6762e7923c11d92d80249c"} err="failed to get container status \"9bfedab1e139082dcffe49602d387555eefb49d7fd6762e7923c11d92d80249c\": rpc error: code = NotFound desc = could not find container \"9bfedab1e139082dcffe49602d387555eefb49d7fd6762e7923c11d92d80249c\": container with ID starting with 9bfedab1e139082dcffe49602d387555eefb49d7fd6762e7923c11d92d80249c not found: ID does not exist" Dec 11 16:15:13 crc kubenswrapper[5120]: I1211 16:15:13.462496 5120 scope.go:117] "RemoveContainer" containerID="8ba417cc90e6cfea58e678789ae5058bbc3d2268b159dbaa8ff1bc66231d66a1" Dec 11 16:15:13 crc kubenswrapper[5120]: E1211 16:15:13.462920 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ba417cc90e6cfea58e678789ae5058bbc3d2268b159dbaa8ff1bc66231d66a1\": container with ID starting with 8ba417cc90e6cfea58e678789ae5058bbc3d2268b159dbaa8ff1bc66231d66a1 not found: ID does not exist" containerID="8ba417cc90e6cfea58e678789ae5058bbc3d2268b159dbaa8ff1bc66231d66a1" Dec 11 16:15:13 crc kubenswrapper[5120]: I1211 16:15:13.462969 5120 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ba417cc90e6cfea58e678789ae5058bbc3d2268b159dbaa8ff1bc66231d66a1"} err="failed to get container status \"8ba417cc90e6cfea58e678789ae5058bbc3d2268b159dbaa8ff1bc66231d66a1\": rpc error: code = NotFound desc = could not find container \"8ba417cc90e6cfea58e678789ae5058bbc3d2268b159dbaa8ff1bc66231d66a1\": container with ID starting with 8ba417cc90e6cfea58e678789ae5058bbc3d2268b159dbaa8ff1bc66231d66a1 not found: ID does not exist" Dec 11 16:15:15 crc kubenswrapper[5120]: I1211 16:15:15.037315 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1134ff1f-8d77-4b2a-9123-e8e8419947c8" path="/var/lib/kubelet/pods/1134ff1f-8d77-4b2a-9123-e8e8419947c8/volumes" Dec 11 16:15:16 crc kubenswrapper[5120]: E1211 16:15:16.022616 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:15:29 crc kubenswrapper[5120]: E1211 16:15:29.090120 5120 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 11 16:15:29 crc kubenswrapper[5120]: E1211 16:15:29.090982 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kzkhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-nxhlp_service-telemetry(847525ea-e1cb-43ed-98e3-91baecb73494): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 11 16:15:29 crc kubenswrapper[5120]: E1211 16:15:29.092240 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:15:44 crc kubenswrapper[5120]: E1211 16:15:44.022404 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:15:51 crc kubenswrapper[5120]: I1211 16:15:51.284193 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qzwn6_7143452f-c193-4dbf-872c-a3ae9245f158/kube-multus/0.log" Dec 11 16:15:51 crc kubenswrapper[5120]: I1211 16:15:51.287000 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qzwn6_7143452f-c193-4dbf-872c-a3ae9245f158/kube-multus/0.log" Dec 11 16:15:51 crc 
kubenswrapper[5120]: I1211 16:15:51.292366 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 11 16:15:51 crc kubenswrapper[5120]: I1211 16:15:51.295095 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 11 16:15:55 crc kubenswrapper[5120]: E1211 16:15:55.023138 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:16:08 crc kubenswrapper[5120]: E1211 16:16:08.023801 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:16:19 crc kubenswrapper[5120]: E1211 16:16:19.022686 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:16:30 crc kubenswrapper[5120]: I1211 16:16:30.023006 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 16:16:30 crc kubenswrapper[5120]: E1211 16:16:30.023812 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:16:43 crc kubenswrapper[5120]: E1211 16:16:43.022867 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:16:54 crc kubenswrapper[5120]: E1211 16:16:54.102105 5120 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 11 16:16:54 crc kubenswrapper[5120]: E1211 16:16:54.103080 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kzkhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-nxhlp_service-telemetry(847525ea-e1cb-43ed-98e3-91baecb73494): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 11 16:16:54 crc kubenswrapper[5120]: E1211 16:16:54.104379 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:16:58 crc kubenswrapper[5120]: I1211 16:16:58.718698 5120 patch_prober.go:28] interesting pod/machine-config-daemon-fpg9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 16:16:58 crc kubenswrapper[5120]: I1211 16:16:58.719259 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" podUID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 16:17:06 crc kubenswrapper[5120]: E1211 16:17:06.022882 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest 
unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:17:21 crc kubenswrapper[5120]: E1211 16:17:21.045468 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:17:28 crc kubenswrapper[5120]: I1211 16:17:28.718264 5120 patch_prober.go:28] interesting pod/machine-config-daemon-fpg9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 16:17:28 crc kubenswrapper[5120]: I1211 16:17:28.718850 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" podUID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Dec 11 16:17:32 crc kubenswrapper[5120]: E1211 16:17:32.321529 5120 certificate_manager.go:613] "Certificate request was not signed" err="timed out waiting for the condition" logger="kubernetes.io/kubelet-serving.UnhandledError" Dec 11 16:17:33 crc kubenswrapper[5120]: E1211 16:17:33.022083 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:17:34 crc kubenswrapper[5120]: I1211 16:17:34.389065 5120 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 11 16:17:34 crc kubenswrapper[5120]: I1211 16:17:34.403251 5120 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 11 16:17:34 crc kubenswrapper[5120]: I1211 16:17:34.435574 5120 ???:1] "http: TLS handshake error from 192.168.126.11:52802: no serving certificate available for the kubelet" Dec 11 16:17:34 crc kubenswrapper[5120]: I1211 16:17:34.472039 5120 ???:1] "http: TLS handshake error from 192.168.126.11:52806: no serving certificate 
available for the kubelet" Dec 11 16:17:34 crc kubenswrapper[5120]: I1211 16:17:34.515082 5120 ???:1] "http: TLS handshake error from 192.168.126.11:52816: no serving certificate available for the kubelet" Dec 11 16:17:34 crc kubenswrapper[5120]: I1211 16:17:34.569117 5120 ???:1] "http: TLS handshake error from 192.168.126.11:34786: no serving certificate available for the kubelet" Dec 11 16:17:34 crc kubenswrapper[5120]: I1211 16:17:34.644123 5120 ???:1] "http: TLS handshake error from 192.168.126.11:34800: no serving certificate available for the kubelet" Dec 11 16:17:34 crc kubenswrapper[5120]: I1211 16:17:34.749984 5120 ???:1] "http: TLS handshake error from 192.168.126.11:34814: no serving certificate available for the kubelet" Dec 11 16:17:34 crc kubenswrapper[5120]: I1211 16:17:34.941585 5120 ???:1] "http: TLS handshake error from 192.168.126.11:34816: no serving certificate available for the kubelet" Dec 11 16:17:35 crc kubenswrapper[5120]: I1211 16:17:35.300127 5120 ???:1] "http: TLS handshake error from 192.168.126.11:34830: no serving certificate available for the kubelet" Dec 11 16:17:35 crc kubenswrapper[5120]: I1211 16:17:35.963611 5120 ???:1] "http: TLS handshake error from 192.168.126.11:34832: no serving certificate available for the kubelet" Dec 11 16:17:37 crc kubenswrapper[5120]: I1211 16:17:37.261795 5120 ???:1] "http: TLS handshake error from 192.168.126.11:34848: no serving certificate available for the kubelet" Dec 11 16:17:39 crc kubenswrapper[5120]: I1211 16:17:39.853050 5120 ???:1] "http: TLS handshake error from 192.168.126.11:34852: no serving certificate available for the kubelet" Dec 11 16:17:44 crc kubenswrapper[5120]: E1211 16:17:44.022502 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: 
unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:17:44 crc kubenswrapper[5120]: I1211 16:17:44.995946 5120 ???:1] "http: TLS handshake error from 192.168.126.11:56084: no serving certificate available for the kubelet" Dec 11 16:17:55 crc kubenswrapper[5120]: I1211 16:17:55.264047 5120 ???:1] "http: TLS handshake error from 192.168.126.11:44306: no serving certificate available for the kubelet" Dec 11 16:17:58 crc kubenswrapper[5120]: I1211 16:17:58.718328 5120 patch_prober.go:28] interesting pod/machine-config-daemon-fpg9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 16:17:58 crc kubenswrapper[5120]: I1211 16:17:58.718721 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" podUID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 16:17:58 crc kubenswrapper[5120]: I1211 16:17:58.718786 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" Dec 11 16:17:58 crc kubenswrapper[5120]: I1211 
16:17:58.719606 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"58d458b8c88e48677a4cc48872bb0622adbaa7a6b5faa341f2ec0189bf671557"} pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 16:17:58 crc kubenswrapper[5120]: I1211 16:17:58.719711 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" podUID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerName="machine-config-daemon" containerID="cri-o://58d458b8c88e48677a4cc48872bb0622adbaa7a6b5faa341f2ec0189bf671557" gracePeriod=600 Dec 11 16:17:59 crc kubenswrapper[5120]: E1211 16:17:59.029023 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:17:59 crc kubenswrapper[5120]: I1211 16:17:59.547805 5120 generic.go:358] "Generic (PLEG): container finished" podID="e868a29f-b837-4513-ad30-f5b6c4354a09" 
containerID="58d458b8c88e48677a4cc48872bb0622adbaa7a6b5faa341f2ec0189bf671557" exitCode=0 Dec 11 16:17:59 crc kubenswrapper[5120]: I1211 16:17:59.547853 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" event={"ID":"e868a29f-b837-4513-ad30-f5b6c4354a09","Type":"ContainerDied","Data":"58d458b8c88e48677a4cc48872bb0622adbaa7a6b5faa341f2ec0189bf671557"} Dec 11 16:17:59 crc kubenswrapper[5120]: I1211 16:17:59.548326 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" event={"ID":"e868a29f-b837-4513-ad30-f5b6c4354a09","Type":"ContainerStarted","Data":"be54f60168a07a87c59bd7822a9702781ae5fe436bdb09971bcc65a678e8872b"} Dec 11 16:17:59 crc kubenswrapper[5120]: I1211 16:17:59.548347 5120 scope.go:117] "RemoveContainer" containerID="c1c4951fd13c7ebf545cc70952dba6bad301362a8233620d9c4df1820bb44170" Dec 11 16:18:14 crc kubenswrapper[5120]: E1211 16:18:14.021971 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:18:15 crc kubenswrapper[5120]: I1211 16:18:15.778906 5120 
???:1] "http: TLS handshake error from 192.168.126.11:58122: no serving certificate available for the kubelet" Dec 11 16:18:26 crc kubenswrapper[5120]: E1211 16:18:26.022000 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:18:40 crc kubenswrapper[5120]: E1211 16:18:40.022401 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" 
pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:18:52 crc kubenswrapper[5120]: E1211 16:18:52.022529 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:18:52 crc kubenswrapper[5120]: I1211 16:18:52.953456 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-vhlbh"] Dec 11 16:18:52 crc kubenswrapper[5120]: I1211 16:18:52.954494 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e0ef7396-0521-45c4-9c34-9e500167c705" containerName="registry-server" Dec 11 16:18:52 crc kubenswrapper[5120]: I1211 16:18:52.954526 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0ef7396-0521-45c4-9c34-9e500167c705" containerName="registry-server" Dec 11 16:18:52 crc kubenswrapper[5120]: I1211 16:18:52.954563 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1134ff1f-8d77-4b2a-9123-e8e8419947c8" containerName="extract-utilities" Dec 11 16:18:52 crc kubenswrapper[5120]: I1211 16:18:52.954575 5120 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1134ff1f-8d77-4b2a-9123-e8e8419947c8" containerName="extract-utilities" Dec 11 16:18:52 crc kubenswrapper[5120]: I1211 16:18:52.954601 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1134ff1f-8d77-4b2a-9123-e8e8419947c8" containerName="extract-content" Dec 11 16:18:52 crc kubenswrapper[5120]: I1211 16:18:52.954610 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1134ff1f-8d77-4b2a-9123-e8e8419947c8" containerName="extract-content" Dec 11 16:18:52 crc kubenswrapper[5120]: I1211 16:18:52.954632 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e0ef7396-0521-45c4-9c34-9e500167c705" containerName="extract-content" Dec 11 16:18:52 crc kubenswrapper[5120]: I1211 16:18:52.954643 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0ef7396-0521-45c4-9c34-9e500167c705" containerName="extract-content" Dec 11 16:18:52 crc kubenswrapper[5120]: I1211 16:18:52.954671 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6" containerName="collect-profiles" Dec 11 16:18:52 crc kubenswrapper[5120]: I1211 16:18:52.954683 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6" containerName="collect-profiles" Dec 11 16:18:52 crc kubenswrapper[5120]: I1211 16:18:52.954712 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1134ff1f-8d77-4b2a-9123-e8e8419947c8" containerName="registry-server" Dec 11 16:18:52 crc kubenswrapper[5120]: I1211 16:18:52.954720 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1134ff1f-8d77-4b2a-9123-e8e8419947c8" containerName="registry-server" Dec 11 16:18:52 crc kubenswrapper[5120]: I1211 16:18:52.954733 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e0ef7396-0521-45c4-9c34-9e500167c705" containerName="extract-utilities" Dec 11 16:18:52 crc kubenswrapper[5120]: I1211 
16:18:52.954741 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0ef7396-0521-45c4-9c34-9e500167c705" containerName="extract-utilities" Dec 11 16:18:52 crc kubenswrapper[5120]: I1211 16:18:52.954880 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="1134ff1f-8d77-4b2a-9123-e8e8419947c8" containerName="registry-server" Dec 11 16:18:52 crc kubenswrapper[5120]: I1211 16:18:52.954894 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="d142f3ba-a4ea-4fb7-a811-34f8cc99c6e6" containerName="collect-profiles" Dec 11 16:18:52 crc kubenswrapper[5120]: I1211 16:18:52.954911 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="e0ef7396-0521-45c4-9c34-9e500167c705" containerName="registry-server" Dec 11 16:18:52 crc kubenswrapper[5120]: I1211 16:18:52.967023 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-vhlbh"] Dec 11 16:18:52 crc kubenswrapper[5120]: I1211 16:18:52.967212 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-vhlbh" Dec 11 16:18:53 crc kubenswrapper[5120]: I1211 16:18:53.034577 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b4bw\" (UniqueName: \"kubernetes.io/projected/89d28d44-5839-49aa-8893-f6eb0e3c79d7-kube-api-access-5b4bw\") pod \"infrawatch-operators-vhlbh\" (UID: \"89d28d44-5839-49aa-8893-f6eb0e3c79d7\") " pod="service-telemetry/infrawatch-operators-vhlbh" Dec 11 16:18:53 crc kubenswrapper[5120]: I1211 16:18:53.135929 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5b4bw\" (UniqueName: \"kubernetes.io/projected/89d28d44-5839-49aa-8893-f6eb0e3c79d7-kube-api-access-5b4bw\") pod \"infrawatch-operators-vhlbh\" (UID: \"89d28d44-5839-49aa-8893-f6eb0e3c79d7\") " pod="service-telemetry/infrawatch-operators-vhlbh" Dec 11 16:18:53 crc kubenswrapper[5120]: I1211 16:18:53.157862 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b4bw\" (UniqueName: \"kubernetes.io/projected/89d28d44-5839-49aa-8893-f6eb0e3c79d7-kube-api-access-5b4bw\") pod \"infrawatch-operators-vhlbh\" (UID: \"89d28d44-5839-49aa-8893-f6eb0e3c79d7\") " pod="service-telemetry/infrawatch-operators-vhlbh" Dec 11 16:18:53 crc kubenswrapper[5120]: I1211 16:18:53.315301 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-vhlbh" Dec 11 16:18:53 crc kubenswrapper[5120]: I1211 16:18:53.812067 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-vhlbh"] Dec 11 16:18:53 crc kubenswrapper[5120]: E1211 16:18:53.900898 5120 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 11 16:18:53 crc kubenswrapper[5120]: E1211 16:18:53.901235 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5b4bw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-vhlbh_service-telemetry(89d28d44-5839-49aa-8893-f6eb0e3c79d7): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 11 
16:18:53 crc kubenswrapper[5120]: E1211 16:18:53.902990 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:18:54 crc kubenswrapper[5120]: I1211 16:18:54.012212 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-vhlbh" event={"ID":"89d28d44-5839-49aa-8893-f6eb0e3c79d7","Type":"ContainerStarted","Data":"ba058af1dd2e61eaaf06f61fcf29904d9059ee880169565b887df330ed907a18"} Dec 11 16:18:54 crc kubenswrapper[5120]: E1211 16:18:54.013329 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:18:55 crc kubenswrapper[5120]: E1211 16:18:55.021138 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:18:56 crc kubenswrapper[5120]: I1211 16:18:56.771198 5120 ???:1] "http: TLS handshake error from 192.168.126.11:47306: no serving certificate available for the kubelet" Dec 11 16:19:04 crc kubenswrapper[5120]: E1211 16:19:04.022396 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:19:06 crc kubenswrapper[5120]: E1211 16:19:06.104755 5120 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 11 16:19:06 crc kubenswrapper[5120]: E1211 16:19:06.105017 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5b4bw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-vhlbh_service-telemetry(89d28d44-5839-49aa-8893-f6eb0e3c79d7): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 11 16:19:06 crc kubenswrapper[5120]: E1211 16:19:06.106317 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:19:18 crc kubenswrapper[5120]: E1211 16:19:18.022879 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:19:19 crc kubenswrapper[5120]: E1211 16:19:19.031063 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:19:31 crc kubenswrapper[5120]: E1211 16:19:31.105422 5120 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" 
image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 11 16:19:31 crc kubenswrapper[5120]: E1211 16:19:31.106102 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5b4bw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-vhlbh_service-telemetry(89d28d44-5839-49aa-8893-f6eb0e3c79d7): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 11 16:19:31 crc kubenswrapper[5120]: E1211 16:19:31.107316 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:19:32 crc kubenswrapper[5120]: E1211 16:19:32.021771 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:19:45 crc kubenswrapper[5120]: E1211 16:19:45.104923 5120 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 11 16:19:45 crc kubenswrapper[5120]: E1211 16:19:45.105519 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kzkhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-nxhlp_service-telemetry(847525ea-e1cb-43ed-98e3-91baecb73494): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 11 16:19:45 crc kubenswrapper[5120]: E1211 16:19:45.106755 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:19:46 crc kubenswrapper[5120]: E1211 16:19:46.046571 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:19:58 crc kubenswrapper[5120]: E1211 16:19:58.022939 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:20:00 crc kubenswrapper[5120]: E1211 16:20:00.023694 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:20:10 crc kubenswrapper[5120]: E1211 16:20:10.023250 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:20:11 crc kubenswrapper[5120]: E1211 16:20:11.028110 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:20:18 crc kubenswrapper[5120]: I1211 16:20:18.712537 5120 ???:1] "http: TLS handshake error from 192.168.126.11:56678: no serving certificate available for the kubelet" Dec 11 16:20:25 crc kubenswrapper[5120]: E1211 16:20:25.023078 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:20:25 crc kubenswrapper[5120]: E1211 16:20:25.106559 5120 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 11 16:20:25 crc kubenswrapper[5120]: E1211 16:20:25.106871 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5b4bw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-vhlbh_service-telemetry(89d28d44-5839-49aa-8893-f6eb0e3c79d7): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 11 16:20:25 crc kubenswrapper[5120]: E1211 16:20:25.108243 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:20:28 crc kubenswrapper[5120]: I1211 16:20:28.718500 5120 patch_prober.go:28] interesting pod/machine-config-daemon-fpg9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 16:20:28 crc kubenswrapper[5120]: I1211 16:20:28.718823 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" podUID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 16:20:38 crc kubenswrapper[5120]: E1211 16:20:38.022432 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:20:40 crc kubenswrapper[5120]: E1211 16:20:40.023254 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:20:49 crc kubenswrapper[5120]: E1211 16:20:49.025196 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:20:51 crc kubenswrapper[5120]: E1211 16:20:51.028338 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:20:51 crc kubenswrapper[5120]: I1211 16:20:51.379058 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qzwn6_7143452f-c193-4dbf-872c-a3ae9245f158/kube-multus/0.log" Dec 11 16:20:51 crc kubenswrapper[5120]: I1211 16:20:51.386437 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 11 16:20:51 crc kubenswrapper[5120]: I1211 16:20:51.387887 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qzwn6_7143452f-c193-4dbf-872c-a3ae9245f158/kube-multus/0.log" Dec 11 16:20:51 crc kubenswrapper[5120]: I1211 16:20:51.393913 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 11 16:20:58 crc kubenswrapper[5120]: I1211 16:20:58.719027 5120 patch_prober.go:28] interesting pod/machine-config-daemon-fpg9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 16:20:58 crc kubenswrapper[5120]: I1211 16:20:58.719575 5120 prober.go:120] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" podUID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 16:21:03 crc kubenswrapper[5120]: E1211 16:21:03.022874 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:21:04 crc kubenswrapper[5120]: E1211 16:21:04.023720 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image 
source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:21:15 crc kubenswrapper[5120]: E1211 16:21:15.023584 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:21:19 crc kubenswrapper[5120]: E1211 16:21:19.023133 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading 
manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:21:28 crc kubenswrapper[5120]: I1211 16:21:28.717680 5120 patch_prober.go:28] interesting pod/machine-config-daemon-fpg9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 16:21:28 crc kubenswrapper[5120]: I1211 16:21:28.719016 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" podUID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 16:21:28 crc kubenswrapper[5120]: I1211 16:21:28.719193 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" Dec 11 16:21:28 crc kubenswrapper[5120]: I1211 16:21:28.720516 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"be54f60168a07a87c59bd7822a9702781ae5fe436bdb09971bcc65a678e8872b"} pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 16:21:28 crc kubenswrapper[5120]: I1211 16:21:28.720671 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" podUID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerName="machine-config-daemon" 
containerID="cri-o://be54f60168a07a87c59bd7822a9702781ae5fe436bdb09971bcc65a678e8872b" gracePeriod=600 Dec 11 16:21:29 crc kubenswrapper[5120]: E1211 16:21:29.025201 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:21:29 crc kubenswrapper[5120]: I1211 16:21:29.760861 5120 generic.go:358] "Generic (PLEG): container finished" podID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerID="be54f60168a07a87c59bd7822a9702781ae5fe436bdb09971bcc65a678e8872b" exitCode=0 Dec 11 16:21:29 crc kubenswrapper[5120]: I1211 16:21:29.760919 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" event={"ID":"e868a29f-b837-4513-ad30-f5b6c4354a09","Type":"ContainerDied","Data":"be54f60168a07a87c59bd7822a9702781ae5fe436bdb09971bcc65a678e8872b"} Dec 11 16:21:29 crc kubenswrapper[5120]: I1211 16:21:29.761644 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" 
event={"ID":"e868a29f-b837-4513-ad30-f5b6c4354a09","Type":"ContainerStarted","Data":"f84d3fc3de69e8f8987f2cc1a303015749ecefd934dc31b0a7cafd353cfaaf14"} Dec 11 16:21:29 crc kubenswrapper[5120]: I1211 16:21:29.761692 5120 scope.go:117] "RemoveContainer" containerID="58d458b8c88e48677a4cc48872bb0622adbaa7a6b5faa341f2ec0189bf671557" Dec 11 16:21:33 crc kubenswrapper[5120]: I1211 16:21:33.022986 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 16:21:33 crc kubenswrapper[5120]: E1211 16:21:33.024082 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:21:40 crc kubenswrapper[5120]: E1211 16:21:40.022461 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:21:48 crc kubenswrapper[5120]: E1211 16:21:48.078745 5120 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Dec 11 16:21:48 crc kubenswrapper[5120]: E1211 16:21:48.079569 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5b4bw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-vhlbh_service-telemetry(89d28d44-5839-49aa-8893-f6eb0e3c79d7): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError" Dec 11 16:21:48 crc kubenswrapper[5120]: E1211 16:21:48.080824 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:21:51 crc kubenswrapper[5120]: E1211 16:21:51.034022 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:21:54 crc kubenswrapper[5120]: I1211 16:21:54.582922 5120 ???:1] "http: TLS handshake error from 192.168.126.11:38070: no serving certificate available for the kubelet" Dec 11 16:22:00 crc kubenswrapper[5120]: E1211 16:22:00.022723 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:22:05 crc kubenswrapper[5120]: E1211 16:22:05.022743 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:22:14 crc kubenswrapper[5120]: E1211 16:22:14.022643 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:22:19 crc kubenswrapper[5120]: E1211 16:22:19.022989 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:22:25 crc kubenswrapper[5120]: E1211 16:22:25.022678 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:22:31 crc kubenswrapper[5120]: E1211 16:22:31.030934 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:22:39 crc kubenswrapper[5120]: E1211 16:22:39.022209 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:22:45 crc kubenswrapper[5120]: E1211 16:22:45.023401 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:22:50 crc kubenswrapper[5120]: E1211 16:22:50.022345 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:22:56 crc kubenswrapper[5120]: E1211 16:22:56.022280 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:23:01 crc kubenswrapper[5120]: E1211 16:23:01.034860 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:23:02 crc kubenswrapper[5120]: I1211 16:23:02.588726 5120 ???:1] "http: TLS handshake error from 192.168.126.11:51458: no serving certificate available for the kubelet" Dec 11 16:23:11 crc kubenswrapper[5120]: E1211 16:23:11.031752 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: 
initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:23:13 crc kubenswrapper[5120]: E1211 16:23:13.022813 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:23:22 crc kubenswrapper[5120]: E1211 16:23:22.022356 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source 
docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:23:27 crc kubenswrapper[5120]: E1211 16:23:27.021905 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:23:35 crc kubenswrapper[5120]: I1211 16:23:35.589667 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sbgxg"] Dec 11 16:23:35 crc kubenswrapper[5120]: I1211 16:23:35.604382 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sbgxg" Dec 11 16:23:35 crc kubenswrapper[5120]: I1211 16:23:35.618504 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sbgxg"] Dec 11 16:23:35 crc kubenswrapper[5120]: I1211 16:23:35.702336 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cce3f601-a61e-40ed-8c0a-1a3165228df0-catalog-content\") pod \"redhat-operators-sbgxg\" (UID: \"cce3f601-a61e-40ed-8c0a-1a3165228df0\") " pod="openshift-marketplace/redhat-operators-sbgxg" Dec 11 16:23:35 crc kubenswrapper[5120]: I1211 16:23:35.702849 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kk8b\" (UniqueName: \"kubernetes.io/projected/cce3f601-a61e-40ed-8c0a-1a3165228df0-kube-api-access-7kk8b\") pod \"redhat-operators-sbgxg\" (UID: \"cce3f601-a61e-40ed-8c0a-1a3165228df0\") " pod="openshift-marketplace/redhat-operators-sbgxg" Dec 11 16:23:35 crc kubenswrapper[5120]: I1211 16:23:35.703038 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cce3f601-a61e-40ed-8c0a-1a3165228df0-utilities\") pod \"redhat-operators-sbgxg\" (UID: \"cce3f601-a61e-40ed-8c0a-1a3165228df0\") " pod="openshift-marketplace/redhat-operators-sbgxg" Dec 11 16:23:35 crc kubenswrapper[5120]: I1211 16:23:35.804346 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cce3f601-a61e-40ed-8c0a-1a3165228df0-utilities\") pod \"redhat-operators-sbgxg\" (UID: \"cce3f601-a61e-40ed-8c0a-1a3165228df0\") " pod="openshift-marketplace/redhat-operators-sbgxg" Dec 11 16:23:35 crc kubenswrapper[5120]: I1211 16:23:35.804403 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cce3f601-a61e-40ed-8c0a-1a3165228df0-catalog-content\") pod \"redhat-operators-sbgxg\" (UID: \"cce3f601-a61e-40ed-8c0a-1a3165228df0\") " pod="openshift-marketplace/redhat-operators-sbgxg" Dec 11 16:23:35 crc kubenswrapper[5120]: I1211 16:23:35.804456 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7kk8b\" (UniqueName: \"kubernetes.io/projected/cce3f601-a61e-40ed-8c0a-1a3165228df0-kube-api-access-7kk8b\") pod \"redhat-operators-sbgxg\" (UID: \"cce3f601-a61e-40ed-8c0a-1a3165228df0\") " pod="openshift-marketplace/redhat-operators-sbgxg" Dec 11 16:23:35 crc kubenswrapper[5120]: I1211 16:23:35.805057 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cce3f601-a61e-40ed-8c0a-1a3165228df0-utilities\") pod \"redhat-operators-sbgxg\" (UID: \"cce3f601-a61e-40ed-8c0a-1a3165228df0\") " pod="openshift-marketplace/redhat-operators-sbgxg" Dec 11 16:23:35 crc kubenswrapper[5120]: I1211 16:23:35.805387 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cce3f601-a61e-40ed-8c0a-1a3165228df0-catalog-content\") pod \"redhat-operators-sbgxg\" (UID: \"cce3f601-a61e-40ed-8c0a-1a3165228df0\") " pod="openshift-marketplace/redhat-operators-sbgxg" Dec 11 16:23:35 crc kubenswrapper[5120]: I1211 16:23:35.831328 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kk8b\" (UniqueName: \"kubernetes.io/projected/cce3f601-a61e-40ed-8c0a-1a3165228df0-kube-api-access-7kk8b\") pod \"redhat-operators-sbgxg\" (UID: \"cce3f601-a61e-40ed-8c0a-1a3165228df0\") " pod="openshift-marketplace/redhat-operators-sbgxg" Dec 11 16:23:35 crc kubenswrapper[5120]: I1211 16:23:35.934948 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sbgxg" Dec 11 16:23:36 crc kubenswrapper[5120]: E1211 16:23:36.022684 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:23:36 crc kubenswrapper[5120]: I1211 16:23:36.386827 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sbgxg"] Dec 11 16:23:37 crc kubenswrapper[5120]: I1211 16:23:37.153896 5120 generic.go:358] "Generic (PLEG): container finished" podID="cce3f601-a61e-40ed-8c0a-1a3165228df0" containerID="9318e63f14db47d08d3f764d7c34171e02f4c7535a9b504cbd69efa7082e788d" exitCode=0 Dec 11 16:23:37 crc kubenswrapper[5120]: I1211 16:23:37.154086 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbgxg" event={"ID":"cce3f601-a61e-40ed-8c0a-1a3165228df0","Type":"ContainerDied","Data":"9318e63f14db47d08d3f764d7c34171e02f4c7535a9b504cbd69efa7082e788d"} Dec 11 16:23:37 crc kubenswrapper[5120]: I1211 16:23:37.154187 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbgxg" 
event={"ID":"cce3f601-a61e-40ed-8c0a-1a3165228df0","Type":"ContainerStarted","Data":"20abd9a147eebe716ceb1ff7a6483bd323824d2971aefbd0b20b5873fdca63ac"} Dec 11 16:23:39 crc kubenswrapper[5120]: E1211 16:23:39.022665 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:23:44 crc kubenswrapper[5120]: I1211 16:23:44.203353 5120 generic.go:358] "Generic (PLEG): container finished" podID="cce3f601-a61e-40ed-8c0a-1a3165228df0" containerID="23cd01821c30bc0945022fa991afba797acf8caf74ea74409d85aca2709a7c0e" exitCode=0 Dec 11 16:23:44 crc kubenswrapper[5120]: I1211 16:23:44.203436 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbgxg" event={"ID":"cce3f601-a61e-40ed-8c0a-1a3165228df0","Type":"ContainerDied","Data":"23cd01821c30bc0945022fa991afba797acf8caf74ea74409d85aca2709a7c0e"} Dec 11 16:23:45 crc kubenswrapper[5120]: I1211 16:23:45.215530 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbgxg" 
event={"ID":"cce3f601-a61e-40ed-8c0a-1a3165228df0","Type":"ContainerStarted","Data":"0bc8d3efc085fcf263d457a6a9329befdd9b424e8506c0caf763abab26c31405"} Dec 11 16:23:45 crc kubenswrapper[5120]: I1211 16:23:45.239202 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sbgxg" podStartSLOduration=3.896527798 podStartE2EDuration="10.23918632s" podCreationTimestamp="2025-12-11 16:23:35 +0000 UTC" firstStartedPulling="2025-12-11 16:23:37.155587537 +0000 UTC m=+1366.409890908" lastFinishedPulling="2025-12-11 16:23:43.498246079 +0000 UTC m=+1372.752549430" observedRunningTime="2025-12-11 16:23:45.237579547 +0000 UTC m=+1374.491882928" watchObservedRunningTime="2025-12-11 16:23:45.23918632 +0000 UTC m=+1374.493489651" Dec 11 16:23:45 crc kubenswrapper[5120]: I1211 16:23:45.935322 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sbgxg" Dec 11 16:23:45 crc kubenswrapper[5120]: I1211 16:23:45.935697 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-sbgxg" Dec 11 16:23:46 crc kubenswrapper[5120]: I1211 16:23:46.976521 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sbgxg" podUID="cce3f601-a61e-40ed-8c0a-1a3165228df0" containerName="registry-server" probeResult="failure" output=< Dec 11 16:23:46 crc kubenswrapper[5120]: timeout: failed to connect service ":50051" within 1s Dec 11 16:23:46 crc kubenswrapper[5120]: > Dec 11 16:23:49 crc kubenswrapper[5120]: E1211 16:23:49.021752 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: 
initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:23:50 crc kubenswrapper[5120]: E1211 16:23:50.021594 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:23:55 crc kubenswrapper[5120]: I1211 16:23:55.970862 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sbgxg" Dec 11 16:23:56 crc kubenswrapper[5120]: I1211 16:23:56.005898 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sbgxg" Dec 11 16:23:56 crc kubenswrapper[5120]: I1211 
16:23:56.159260 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sbgxg"] Dec 11 16:23:56 crc kubenswrapper[5120]: I1211 16:23:56.334748 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4zt8l"] Dec 11 16:23:56 crc kubenswrapper[5120]: I1211 16:23:56.335047 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4zt8l" podUID="3f6e9609-66b2-416f-a395-b4c947ac726c" containerName="registry-server" containerID="cri-o://9c66fdd4836f750ccf429c210700fb861e8d5b0b16a463d00325c93ab57d1124" gracePeriod=2 Dec 11 16:23:58 crc kubenswrapper[5120]: I1211 16:23:58.717925 5120 patch_prober.go:28] interesting pod/machine-config-daemon-fpg9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 16:23:58 crc kubenswrapper[5120]: I1211 16:23:58.718250 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" podUID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 16:23:59 crc kubenswrapper[5120]: I1211 16:23:59.317318 5120 generic.go:358] "Generic (PLEG): container finished" podID="3f6e9609-66b2-416f-a395-b4c947ac726c" containerID="9c66fdd4836f750ccf429c210700fb861e8d5b0b16a463d00325c93ab57d1124" exitCode=0 Dec 11 16:23:59 crc kubenswrapper[5120]: I1211 16:23:59.317385 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4zt8l" event={"ID":"3f6e9609-66b2-416f-a395-b4c947ac726c","Type":"ContainerDied","Data":"9c66fdd4836f750ccf429c210700fb861e8d5b0b16a463d00325c93ab57d1124"} Dec 11 16:23:59 
crc kubenswrapper[5120]: I1211 16:23:59.414934 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4zt8l" Dec 11 16:23:59 crc kubenswrapper[5120]: I1211 16:23:59.477586 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f6e9609-66b2-416f-a395-b4c947ac726c-catalog-content\") pod \"3f6e9609-66b2-416f-a395-b4c947ac726c\" (UID: \"3f6e9609-66b2-416f-a395-b4c947ac726c\") " Dec 11 16:23:59 crc kubenswrapper[5120]: I1211 16:23:59.477754 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f6e9609-66b2-416f-a395-b4c947ac726c-utilities\") pod \"3f6e9609-66b2-416f-a395-b4c947ac726c\" (UID: \"3f6e9609-66b2-416f-a395-b4c947ac726c\") " Dec 11 16:23:59 crc kubenswrapper[5120]: I1211 16:23:59.478048 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx4zl\" (UniqueName: \"kubernetes.io/projected/3f6e9609-66b2-416f-a395-b4c947ac726c-kube-api-access-tx4zl\") pod \"3f6e9609-66b2-416f-a395-b4c947ac726c\" (UID: \"3f6e9609-66b2-416f-a395-b4c947ac726c\") " Dec 11 16:23:59 crc kubenswrapper[5120]: I1211 16:23:59.478844 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f6e9609-66b2-416f-a395-b4c947ac726c-utilities" (OuterVolumeSpecName: "utilities") pod "3f6e9609-66b2-416f-a395-b4c947ac726c" (UID: "3f6e9609-66b2-416f-a395-b4c947ac726c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:23:59 crc kubenswrapper[5120]: I1211 16:23:59.508670 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f6e9609-66b2-416f-a395-b4c947ac726c-kube-api-access-tx4zl" (OuterVolumeSpecName: "kube-api-access-tx4zl") pod "3f6e9609-66b2-416f-a395-b4c947ac726c" (UID: "3f6e9609-66b2-416f-a395-b4c947ac726c"). InnerVolumeSpecName "kube-api-access-tx4zl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:23:59 crc kubenswrapper[5120]: I1211 16:23:59.585719 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f6e9609-66b2-416f-a395-b4c947ac726c-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 16:23:59 crc kubenswrapper[5120]: I1211 16:23:59.585755 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tx4zl\" (UniqueName: \"kubernetes.io/projected/3f6e9609-66b2-416f-a395-b4c947ac726c-kube-api-access-tx4zl\") on node \"crc\" DevicePath \"\"" Dec 11 16:23:59 crc kubenswrapper[5120]: I1211 16:23:59.654924 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f6e9609-66b2-416f-a395-b4c947ac726c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f6e9609-66b2-416f-a395-b4c947ac726c" (UID: "3f6e9609-66b2-416f-a395-b4c947ac726c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:23:59 crc kubenswrapper[5120]: I1211 16:23:59.686699 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f6e9609-66b2-416f-a395-b4c947ac726c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 16:24:00 crc kubenswrapper[5120]: E1211 16:24:00.021917 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:24:00 crc kubenswrapper[5120]: I1211 16:24:00.336009 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4zt8l" Dec 11 16:24:00 crc kubenswrapper[5120]: I1211 16:24:00.336027 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4zt8l" event={"ID":"3f6e9609-66b2-416f-a395-b4c947ac726c","Type":"ContainerDied","Data":"eef9d07f0d1f086de69c0c5dbe75e8dd77b2c7e13b269f2e13f052c85751a082"} Dec 11 16:24:00 crc kubenswrapper[5120]: I1211 16:24:00.336106 5120 scope.go:117] "RemoveContainer" containerID="9c66fdd4836f750ccf429c210700fb861e8d5b0b16a463d00325c93ab57d1124" Dec 11 16:24:00 crc kubenswrapper[5120]: I1211 16:24:00.354876 5120 scope.go:117] "RemoveContainer" containerID="29e378189a71521670b0c306777e44f35c54eab04342a04cd84dc69283738897" Dec 11 16:24:00 crc kubenswrapper[5120]: I1211 16:24:00.369904 5120 scope.go:117] "RemoveContainer" containerID="29e378189a71521670b0c306777e44f35c54eab04342a04cd84dc69283738897" Dec 11 16:24:00 crc kubenswrapper[5120]: I1211 16:24:00.375322 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4zt8l"] Dec 11 16:24:00 crc kubenswrapper[5120]: I1211 16:24:00.380034 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4zt8l"] Dec 11 16:24:00 crc kubenswrapper[5120]: E1211 16:24:00.391209 5120 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_extract-content_redhat-operators-4zt8l_openshift-marketplace_3f6e9609-66b2-416f-a395-b4c947ac726c_0 in pod sandbox eef9d07f0d1f086de69c0c5dbe75e8dd77b2c7e13b269f2e13f052c85751a082: identifier is not a container" containerID="29e378189a71521670b0c306777e44f35c54eab04342a04cd84dc69283738897" Dec 11 16:24:00 crc kubenswrapper[5120]: I1211 16:24:00.391240 5120 scope.go:117] "RemoveContainer" containerID="fffc1e5e77c831b83d0cd822e66e3d047c5e147f38a1ed214870d0b23af8e307" Dec 11 16:24:00 crc kubenswrapper[5120]: I1211 16:24:00.391265 
5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29e378189a71521670b0c306777e44f35c54eab04342a04cd84dc69283738897"} err="rpc error: code = Unknown desc = failed to delete container k8s_extract-content_redhat-operators-4zt8l_openshift-marketplace_3f6e9609-66b2-416f-a395-b4c947ac726c_0 in pod sandbox eef9d07f0d1f086de69c0c5dbe75e8dd77b2c7e13b269f2e13f052c85751a082: identifier is not a container" Dec 11 16:24:00 crc kubenswrapper[5120]: I1211 16:24:00.391298 5120 scope.go:117] "RemoveContainer" containerID="fffc1e5e77c831b83d0cd822e66e3d047c5e147f38a1ed214870d0b23af8e307" Dec 11 16:24:00 crc kubenswrapper[5120]: E1211 16:24:00.406420 5120 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_extract-utilities_redhat-operators-4zt8l_openshift-marketplace_3f6e9609-66b2-416f-a395-b4c947ac726c_0 in pod sandbox eef9d07f0d1f086de69c0c5dbe75e8dd77b2c7e13b269f2e13f052c85751a082 from index: no such id: 'fffc1e5e77c831b83d0cd822e66e3d047c5e147f38a1ed214870d0b23af8e307'" containerID="fffc1e5e77c831b83d0cd822e66e3d047c5e147f38a1ed214870d0b23af8e307" Dec 11 16:24:00 crc kubenswrapper[5120]: I1211 16:24:00.406459 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fffc1e5e77c831b83d0cd822e66e3d047c5e147f38a1ed214870d0b23af8e307"} err="rpc error: code = Unknown desc = failed to delete container k8s_extract-utilities_redhat-operators-4zt8l_openshift-marketplace_3f6e9609-66b2-416f-a395-b4c947ac726c_0 in pod sandbox eef9d07f0d1f086de69c0c5dbe75e8dd77b2c7e13b269f2e13f052c85751a082 from index: no such id: 'fffc1e5e77c831b83d0cd822e66e3d047c5e147f38a1ed214870d0b23af8e307'" Dec 11 16:24:00 crc kubenswrapper[5120]: I1211 16:24:00.406498 5120 scope.go:117] "RemoveContainer" containerID="9c66fdd4836f750ccf429c210700fb861e8d5b0b16a463d00325c93ab57d1124" Dec 11 16:24:00 crc kubenswrapper[5120]: E1211 16:24:00.406908 5120 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c66fdd4836f750ccf429c210700fb861e8d5b0b16a463d00325c93ab57d1124\": container with ID starting with 9c66fdd4836f750ccf429c210700fb861e8d5b0b16a463d00325c93ab57d1124 not found: ID does not exist" containerID="9c66fdd4836f750ccf429c210700fb861e8d5b0b16a463d00325c93ab57d1124" Dec 11 16:24:00 crc kubenswrapper[5120]: E1211 16:24:00.406948 5120 kuberuntime_gc.go:150] "Failed to remove container" err="failed to get container status \"9c66fdd4836f750ccf429c210700fb861e8d5b0b16a463d00325c93ab57d1124\": rpc error: code = NotFound desc = could not find container \"9c66fdd4836f750ccf429c210700fb861e8d5b0b16a463d00325c93ab57d1124\": container with ID starting with 9c66fdd4836f750ccf429c210700fb861e8d5b0b16a463d00325c93ab57d1124 not found: ID does not exist" containerID="9c66fdd4836f750ccf429c210700fb861e8d5b0b16a463d00325c93ab57d1124" Dec 11 16:24:01 crc kubenswrapper[5120]: I1211 16:24:01.029719 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f6e9609-66b2-416f-a395-b4c947ac726c" path="/var/lib/kubelet/pods/3f6e9609-66b2-416f-a395-b4c947ac726c/volumes" Dec 11 16:24:02 crc kubenswrapper[5120]: E1211 16:24:02.022587 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:24:12 crc kubenswrapper[5120]: E1211 16:24:12.024214 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:24:15 crc kubenswrapper[5120]: E1211 16:24:15.023104 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:24:24 crc kubenswrapper[5120]: E1211 16:24:24.023638 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:24:28 crc kubenswrapper[5120]: I1211 16:24:28.717760 5120 patch_prober.go:28] interesting pod/machine-config-daemon-fpg9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 16:24:28 crc kubenswrapper[5120]: I1211 16:24:28.718598 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" podUID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 16:24:30 crc kubenswrapper[5120]: 
E1211 16:24:30.107676 5120 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest"
Dec 11 16:24:30 crc kubenswrapper[5120]: E1211 16:24:30.107834 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5b4bw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-vhlbh_service-telemetry(89d28d44-5839-49aa-8893-f6eb0e3c79d7): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Dec 11 16:24:30 crc kubenswrapper[5120]: E1211 16:24:30.109725 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7"
Dec 11 16:24:35 crc kubenswrapper[5120]: E1211 16:24:35.022909 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494"
Dec 11 16:24:42 crc kubenswrapper[5120]: E1211 16:24:42.022531 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7"
Dec 11 16:24:46 crc kubenswrapper[5120]: E1211 16:24:46.092778 5120 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest"
Dec 11 16:24:46 crc kubenswrapper[5120]: E1211 16:24:46.094237 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kzkhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-nxhlp_service-telemetry(847525ea-e1cb-43ed-98e3-91baecb73494): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown" logger="UnhandledError"
Dec 11 16:24:46 crc kubenswrapper[5120]: E1211 16:24:46.095550 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494"
Dec 11 16:24:54 crc kubenswrapper[5120]: E1211 16:24:54.022327 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7"
Dec 11 16:24:57 crc kubenswrapper[5120]: I1211 16:24:57.651480 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kj2bx/must-gather-f5cwd"]
Dec 11 16:24:57 crc kubenswrapper[5120]: I1211 16:24:57.653011 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3f6e9609-66b2-416f-a395-b4c947ac726c" containerName="extract-utilities"
Dec 11 16:24:57 crc kubenswrapper[5120]: I1211 16:24:57.653047 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f6e9609-66b2-416f-a395-b4c947ac726c" containerName="extract-utilities"
Dec 11 16:24:57 crc kubenswrapper[5120]: I1211 16:24:57.653079 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3f6e9609-66b2-416f-a395-b4c947ac726c" containerName="extract-content"
Dec 11 16:24:57 crc kubenswrapper[5120]: I1211 16:24:57.653093 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f6e9609-66b2-416f-a395-b4c947ac726c" containerName="extract-content"
Dec 11 16:24:57 crc kubenswrapper[5120]: I1211 16:24:57.653132 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3f6e9609-66b2-416f-a395-b4c947ac726c" containerName="registry-server"
Dec 11 16:24:57 crc kubenswrapper[5120]: I1211 16:24:57.653145 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f6e9609-66b2-416f-a395-b4c947ac726c" containerName="registry-server"
Dec 11 16:24:57 crc kubenswrapper[5120]: I1211 16:24:57.653378 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3f6e9609-66b2-416f-a395-b4c947ac726c" containerName="registry-server"
Dec 11 16:24:57 crc kubenswrapper[5120]: I1211 16:24:57.674077 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kj2bx/must-gather-f5cwd"]
Dec 11 16:24:57 crc kubenswrapper[5120]: I1211 16:24:57.674254 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kj2bx/must-gather-f5cwd"
Dec 11 16:24:57 crc kubenswrapper[5120]: I1211 16:24:57.677237 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-kj2bx\"/\"kube-root-ca.crt\""
Dec 11 16:24:57 crc kubenswrapper[5120]: I1211 16:24:57.677528 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-kj2bx\"/\"default-dockercfg-jrscx\""
Dec 11 16:24:57 crc kubenswrapper[5120]: I1211 16:24:57.677708 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-kj2bx\"/\"openshift-service-ca.crt\""
Dec 11 16:24:57 crc kubenswrapper[5120]: I1211 16:24:57.809714 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xt2p\" (UniqueName: \"kubernetes.io/projected/2a5cf5d5-9069-4f40-afc8-984315ba76ac-kube-api-access-6xt2p\") pod \"must-gather-f5cwd\" (UID: \"2a5cf5d5-9069-4f40-afc8-984315ba76ac\") " pod="openshift-must-gather-kj2bx/must-gather-f5cwd"
Dec 11 16:24:57 crc kubenswrapper[5120]: I1211 16:24:57.809801 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2a5cf5d5-9069-4f40-afc8-984315ba76ac-must-gather-output\") pod \"must-gather-f5cwd\" (UID: \"2a5cf5d5-9069-4f40-afc8-984315ba76ac\") " pod="openshift-must-gather-kj2bx/must-gather-f5cwd"
Dec 11 16:24:57 crc kubenswrapper[5120]: I1211 16:24:57.911669 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2a5cf5d5-9069-4f40-afc8-984315ba76ac-must-gather-output\") pod \"must-gather-f5cwd\" (UID: \"2a5cf5d5-9069-4f40-afc8-984315ba76ac\") " pod="openshift-must-gather-kj2bx/must-gather-f5cwd"
Dec 11 16:24:57 crc kubenswrapper[5120]: I1211 16:24:57.911771 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6xt2p\" (UniqueName: \"kubernetes.io/projected/2a5cf5d5-9069-4f40-afc8-984315ba76ac-kube-api-access-6xt2p\") pod \"must-gather-f5cwd\" (UID: \"2a5cf5d5-9069-4f40-afc8-984315ba76ac\") " pod="openshift-must-gather-kj2bx/must-gather-f5cwd"
Dec 11 16:24:57 crc kubenswrapper[5120]: I1211 16:24:57.912229 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2a5cf5d5-9069-4f40-afc8-984315ba76ac-must-gather-output\") pod \"must-gather-f5cwd\" (UID: \"2a5cf5d5-9069-4f40-afc8-984315ba76ac\") " pod="openshift-must-gather-kj2bx/must-gather-f5cwd"
Dec 11 16:24:57 crc kubenswrapper[5120]: I1211 16:24:57.932261 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xt2p\" (UniqueName: \"kubernetes.io/projected/2a5cf5d5-9069-4f40-afc8-984315ba76ac-kube-api-access-6xt2p\") pod \"must-gather-f5cwd\" (UID: \"2a5cf5d5-9069-4f40-afc8-984315ba76ac\") " pod="openshift-must-gather-kj2bx/must-gather-f5cwd"
Dec 11 16:24:58 crc kubenswrapper[5120]: I1211 16:24:58.000284 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kj2bx/must-gather-f5cwd"
Dec 11 16:24:58 crc kubenswrapper[5120]: E1211 16:24:58.023118 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494"
Dec 11 16:24:58 crc kubenswrapper[5120]: I1211 16:24:58.205495 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kj2bx/must-gather-f5cwd"]
Dec 11 16:24:58 crc kubenswrapper[5120]: I1211 16:24:58.717382 5120 patch_prober.go:28] interesting pod/machine-config-daemon-fpg9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 16:24:58 crc kubenswrapper[5120]: I1211 16:24:58.717695 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" podUID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 16:24:58 crc kubenswrapper[5120]: I1211 16:24:58.717742 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g"
Dec 11 16:24:58 crc kubenswrapper[5120]: I1211 16:24:58.718386 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f84d3fc3de69e8f8987f2cc1a303015749ecefd934dc31b0a7cafd353cfaaf14"} pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 11 16:24:58 crc kubenswrapper[5120]: I1211 16:24:58.718449 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" podUID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerName="machine-config-daemon" containerID="cri-o://f84d3fc3de69e8f8987f2cc1a303015749ecefd934dc31b0a7cafd353cfaaf14" gracePeriod=600
Dec 11 16:24:58 crc kubenswrapper[5120]: I1211 16:24:58.911979 5120 generic.go:358] "Generic (PLEG): container finished" podID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerID="f84d3fc3de69e8f8987f2cc1a303015749ecefd934dc31b0a7cafd353cfaaf14" exitCode=0
Dec 11 16:24:58 crc kubenswrapper[5120]: I1211 16:24:58.912084 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" event={"ID":"e868a29f-b837-4513-ad30-f5b6c4354a09","Type":"ContainerDied","Data":"f84d3fc3de69e8f8987f2cc1a303015749ecefd934dc31b0a7cafd353cfaaf14"}
Dec 11 16:24:58 crc kubenswrapper[5120]: I1211 16:24:58.912200 5120 scope.go:117] "RemoveContainer" containerID="be54f60168a07a87c59bd7822a9702781ae5fe436bdb09971bcc65a678e8872b"
Dec 11 16:24:58 crc kubenswrapper[5120]: I1211 16:24:58.914880 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kj2bx/must-gather-f5cwd" event={"ID":"2a5cf5d5-9069-4f40-afc8-984315ba76ac","Type":"ContainerStarted","Data":"15a1681f2231e5db11ce921d97c9212030398226ae2707c1878b030ee88b7038"}
Dec 11 16:24:59 crc kubenswrapper[5120]: I1211 16:24:59.923882 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" event={"ID":"e868a29f-b837-4513-ad30-f5b6c4354a09","Type":"ContainerStarted","Data":"c6ec30d6f12f191f820c359606c40e5883a679db5747702ee421038da50d47ca"}
Dec 11 16:25:03 crc kubenswrapper[5120]: I1211 16:25:03.949785 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kj2bx/must-gather-f5cwd" event={"ID":"2a5cf5d5-9069-4f40-afc8-984315ba76ac","Type":"ContainerStarted","Data":"923b9dadc94c34e08cc8382550c434efc2d753d0d064b7face89e77d732219cd"}
Dec 11 16:25:03 crc kubenswrapper[5120]: I1211 16:25:03.950443 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kj2bx/must-gather-f5cwd" event={"ID":"2a5cf5d5-9069-4f40-afc8-984315ba76ac","Type":"ContainerStarted","Data":"e89b16ffc410e506074d53a5f922cd16385c3da3b402be4450e0071ecb15665f"}
Dec 11 16:25:03 crc kubenswrapper[5120]: I1211 16:25:03.970368 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kj2bx/must-gather-f5cwd" podStartSLOduration=2.005718962 podStartE2EDuration="6.970340202s" podCreationTimestamp="2025-12-11 16:24:57 +0000 UTC" firstStartedPulling="2025-12-11 16:24:58.212574058 +0000 UTC m=+1447.466877389" lastFinishedPulling="2025-12-11 16:25:03.177195298 +0000 UTC m=+1452.431498629" observedRunningTime="2025-12-11 16:25:03.966183249 +0000 UTC m=+1453.220486620" watchObservedRunningTime="2025-12-11 16:25:03.970340202 +0000 UTC m=+1453.224643583"
Dec 11 16:25:06 crc kubenswrapper[5120]: E1211 16:25:06.022336 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7"
Dec 11 16:25:07 crc kubenswrapper[5120]: I1211 16:25:07.688871 5120 ???:1] "http: TLS handshake error from 192.168.126.11:49362: no serving certificate available for the kubelet"
Dec 11 16:25:11 crc kubenswrapper[5120]: E1211 16:25:11.026543 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494"
Dec 11 16:25:18 crc kubenswrapper[5120]: E1211 16:25:18.022890 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7"
Dec 11 16:25:25 crc kubenswrapper[5120]: E1211 16:25:25.023426 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494"
Dec 11 16:25:27 crc kubenswrapper[5120]: I1211 16:25:27.962730 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mvqq6"]
Dec 11 16:25:28 crc kubenswrapper[5120]: I1211 16:25:28.017370 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mvqq6"]
Dec 11 16:25:28 crc kubenswrapper[5120]: I1211 16:25:28.017573 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mvqq6"
Dec 11 16:25:28 crc kubenswrapper[5120]: I1211 16:25:28.092364 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ad81319-1779-4f6d-a675-2836d0075e01-catalog-content\") pod \"community-operators-mvqq6\" (UID: \"2ad81319-1779-4f6d-a675-2836d0075e01\") " pod="openshift-marketplace/community-operators-mvqq6"
Dec 11 16:25:28 crc kubenswrapper[5120]: I1211 16:25:28.092778 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ad81319-1779-4f6d-a675-2836d0075e01-utilities\") pod \"community-operators-mvqq6\" (UID: \"2ad81319-1779-4f6d-a675-2836d0075e01\") " pod="openshift-marketplace/community-operators-mvqq6"
Dec 11 16:25:28 crc kubenswrapper[5120]: I1211 16:25:28.092860 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7q7g\" (UniqueName: \"kubernetes.io/projected/2ad81319-1779-4f6d-a675-2836d0075e01-kube-api-access-x7q7g\") pod \"community-operators-mvqq6\" (UID: \"2ad81319-1779-4f6d-a675-2836d0075e01\") " pod="openshift-marketplace/community-operators-mvqq6"
Dec 11 16:25:28 crc kubenswrapper[5120]: I1211 16:25:28.193464 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ad81319-1779-4f6d-a675-2836d0075e01-catalog-content\") pod \"community-operators-mvqq6\" (UID: \"2ad81319-1779-4f6d-a675-2836d0075e01\") " pod="openshift-marketplace/community-operators-mvqq6"
Dec 11 16:25:28 crc kubenswrapper[5120]: I1211 16:25:28.193517 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ad81319-1779-4f6d-a675-2836d0075e01-utilities\") pod \"community-operators-mvqq6\" (UID: \"2ad81319-1779-4f6d-a675-2836d0075e01\") " pod="openshift-marketplace/community-operators-mvqq6"
Dec 11 16:25:28 crc kubenswrapper[5120]: I1211 16:25:28.193574 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x7q7g\" (UniqueName: \"kubernetes.io/projected/2ad81319-1779-4f6d-a675-2836d0075e01-kube-api-access-x7q7g\") pod \"community-operators-mvqq6\" (UID: \"2ad81319-1779-4f6d-a675-2836d0075e01\") " pod="openshift-marketplace/community-operators-mvqq6"
Dec 11 16:25:28 crc kubenswrapper[5120]: I1211 16:25:28.193945 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ad81319-1779-4f6d-a675-2836d0075e01-catalog-content\") pod \"community-operators-mvqq6\" (UID: \"2ad81319-1779-4f6d-a675-2836d0075e01\") " pod="openshift-marketplace/community-operators-mvqq6"
Dec 11 16:25:28 crc kubenswrapper[5120]: I1211 16:25:28.194302 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ad81319-1779-4f6d-a675-2836d0075e01-utilities\") pod \"community-operators-mvqq6\" (UID: \"2ad81319-1779-4f6d-a675-2836d0075e01\") " pod="openshift-marketplace/community-operators-mvqq6"
Dec 11 16:25:28 crc kubenswrapper[5120]: I1211 16:25:28.214241 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7q7g\" (UniqueName: \"kubernetes.io/projected/2ad81319-1779-4f6d-a675-2836d0075e01-kube-api-access-x7q7g\") pod \"community-operators-mvqq6\" (UID: \"2ad81319-1779-4f6d-a675-2836d0075e01\") " pod="openshift-marketplace/community-operators-mvqq6"
Dec 11 16:25:28 crc kubenswrapper[5120]: I1211 16:25:28.348414 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mvqq6"
Dec 11 16:25:28 crc kubenswrapper[5120]: I1211 16:25:28.573819 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mvqq6"]
Dec 11 16:25:29 crc kubenswrapper[5120]: I1211 16:25:29.105124 5120 generic.go:358] "Generic (PLEG): container finished" podID="2ad81319-1779-4f6d-a675-2836d0075e01" containerID="61f00dc0d4395fbbf2876fde24ef3dc27d128efcadcb5259da0eeb665a02520b" exitCode=0
Dec 11 16:25:29 crc kubenswrapper[5120]: I1211 16:25:29.105198 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvqq6" event={"ID":"2ad81319-1779-4f6d-a675-2836d0075e01","Type":"ContainerDied","Data":"61f00dc0d4395fbbf2876fde24ef3dc27d128efcadcb5259da0eeb665a02520b"}
Dec 11 16:25:29 crc kubenswrapper[5120]: I1211 16:25:29.105253 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvqq6" event={"ID":"2ad81319-1779-4f6d-a675-2836d0075e01","Type":"ContainerStarted","Data":"55c84d03298fcc23eb8ca99ca4f2ef9a8847c30ac28f1b015b65423ec3d3a7c9"}
Dec 11 16:25:30 crc kubenswrapper[5120]: I1211 16:25:30.127193 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvqq6" event={"ID":"2ad81319-1779-4f6d-a675-2836d0075e01","Type":"ContainerStarted","Data":"92fa03c3e88d9965239df7683ae97584fc84b5d181924ff0ee9605fd1a9b24d8"}
Dec 11 16:25:31 crc kubenswrapper[5120]: E1211 16:25:31.028726 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7"
Dec 11 16:25:31 crc kubenswrapper[5120]: I1211 16:25:31.133258 5120 generic.go:358] "Generic (PLEG): container finished" podID="2ad81319-1779-4f6d-a675-2836d0075e01" containerID="92fa03c3e88d9965239df7683ae97584fc84b5d181924ff0ee9605fd1a9b24d8" exitCode=0
Dec 11 16:25:31 crc kubenswrapper[5120]: I1211 16:25:31.133305 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvqq6" event={"ID":"2ad81319-1779-4f6d-a675-2836d0075e01","Type":"ContainerDied","Data":"92fa03c3e88d9965239df7683ae97584fc84b5d181924ff0ee9605fd1a9b24d8"}
Dec 11 16:25:32 crc kubenswrapper[5120]: I1211 16:25:32.140096 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvqq6" event={"ID":"2ad81319-1779-4f6d-a675-2836d0075e01","Type":"ContainerStarted","Data":"7eb50fbce867eaf052b8ab15b831b876e3d8047330eb44ab72ea243910a5a730"}
Dec 11 16:25:32 crc kubenswrapper[5120]: I1211 16:25:32.162475 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mvqq6" podStartSLOduration=4.310076232 podStartE2EDuration="5.16245341s" podCreationTimestamp="2025-12-11 16:25:27 +0000 UTC" firstStartedPulling="2025-12-11 16:25:29.107014411 +0000 UTC m=+1478.361317742" lastFinishedPulling="2025-12-11 16:25:29.959391589 +0000 UTC m=+1479.213694920" observedRunningTime="2025-12-11 16:25:32.15988488 +0000 UTC m=+1481.414188221" watchObservedRunningTime="2025-12-11 16:25:32.16245341 +0000 UTC m=+1481.416756741"
Dec 11 16:25:38 crc kubenswrapper[5120]: E1211 16:25:38.022310 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494"
Dec 11 16:25:38 crc kubenswrapper[5120]: I1211 16:25:38.349332 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-mvqq6"
Dec 11 16:25:38 crc kubenswrapper[5120]: I1211 16:25:38.349419 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mvqq6"
Dec 11 16:25:38 crc kubenswrapper[5120]: I1211 16:25:38.399678 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mvqq6"
Dec 11 16:25:39 crc kubenswrapper[5120]: I1211 16:25:39.230884 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mvqq6"
Dec 11 16:25:39 crc kubenswrapper[5120]: I1211 16:25:39.266930 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mvqq6"]
Dec 11 16:25:41 crc kubenswrapper[5120]: I1211 16:25:41.198584 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mvqq6" podUID="2ad81319-1779-4f6d-a675-2836d0075e01" containerName="registry-server" containerID="cri-o://7eb50fbce867eaf052b8ab15b831b876e3d8047330eb44ab72ea243910a5a730" gracePeriod=2
Dec 11 16:25:42 crc kubenswrapper[5120]: I1211 16:25:42.066570 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mvqq6"
Dec 11 16:25:42 crc kubenswrapper[5120]: I1211 16:25:42.184382 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ad81319-1779-4f6d-a675-2836d0075e01-catalog-content\") pod \"2ad81319-1779-4f6d-a675-2836d0075e01\" (UID: \"2ad81319-1779-4f6d-a675-2836d0075e01\") "
Dec 11 16:25:42 crc kubenswrapper[5120]: I1211 16:25:42.184514 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7q7g\" (UniqueName: \"kubernetes.io/projected/2ad81319-1779-4f6d-a675-2836d0075e01-kube-api-access-x7q7g\") pod \"2ad81319-1779-4f6d-a675-2836d0075e01\" (UID: \"2ad81319-1779-4f6d-a675-2836d0075e01\") "
Dec 11 16:25:42 crc kubenswrapper[5120]: I1211 16:25:42.184615 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ad81319-1779-4f6d-a675-2836d0075e01-utilities\") pod \"2ad81319-1779-4f6d-a675-2836d0075e01\" (UID: \"2ad81319-1779-4f6d-a675-2836d0075e01\") "
Dec 11 16:25:42 crc kubenswrapper[5120]: I1211 16:25:42.185561 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ad81319-1779-4f6d-a675-2836d0075e01-utilities" (OuterVolumeSpecName: "utilities") pod "2ad81319-1779-4f6d-a675-2836d0075e01" (UID: "2ad81319-1779-4f6d-a675-2836d0075e01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 11 16:25:42 crc kubenswrapper[5120]: I1211 16:25:42.197352 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ad81319-1779-4f6d-a675-2836d0075e01-kube-api-access-x7q7g" (OuterVolumeSpecName: "kube-api-access-x7q7g") pod "2ad81319-1779-4f6d-a675-2836d0075e01" (UID: "2ad81319-1779-4f6d-a675-2836d0075e01"). InnerVolumeSpecName "kube-api-access-x7q7g". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 11 16:25:42 crc kubenswrapper[5120]: I1211 16:25:42.206702 5120 generic.go:358] "Generic (PLEG): container finished" podID="2ad81319-1779-4f6d-a675-2836d0075e01" containerID="7eb50fbce867eaf052b8ab15b831b876e3d8047330eb44ab72ea243910a5a730" exitCode=0
Dec 11 16:25:42 crc kubenswrapper[5120]: I1211 16:25:42.206857 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvqq6" event={"ID":"2ad81319-1779-4f6d-a675-2836d0075e01","Type":"ContainerDied","Data":"7eb50fbce867eaf052b8ab15b831b876e3d8047330eb44ab72ea243910a5a730"}
Dec 11 16:25:42 crc kubenswrapper[5120]: I1211 16:25:42.206884 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvqq6" event={"ID":"2ad81319-1779-4f6d-a675-2836d0075e01","Type":"ContainerDied","Data":"55c84d03298fcc23eb8ca99ca4f2ef9a8847c30ac28f1b015b65423ec3d3a7c9"}
Dec 11 16:25:42 crc kubenswrapper[5120]: I1211 16:25:42.206900 5120 scope.go:117] "RemoveContainer" containerID="7eb50fbce867eaf052b8ab15b831b876e3d8047330eb44ab72ea243910a5a730"
Dec 11 16:25:42 crc kubenswrapper[5120]: I1211 16:25:42.207027 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mvqq6" Dec 11 16:25:42 crc kubenswrapper[5120]: I1211 16:25:42.224114 5120 scope.go:117] "RemoveContainer" containerID="92fa03c3e88d9965239df7683ae97584fc84b5d181924ff0ee9605fd1a9b24d8" Dec 11 16:25:42 crc kubenswrapper[5120]: I1211 16:25:42.237169 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ad81319-1779-4f6d-a675-2836d0075e01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ad81319-1779-4f6d-a675-2836d0075e01" (UID: "2ad81319-1779-4f6d-a675-2836d0075e01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:25:42 crc kubenswrapper[5120]: I1211 16:25:42.242798 5120 scope.go:117] "RemoveContainer" containerID="61f00dc0d4395fbbf2876fde24ef3dc27d128efcadcb5259da0eeb665a02520b" Dec 11 16:25:42 crc kubenswrapper[5120]: I1211 16:25:42.263637 5120 scope.go:117] "RemoveContainer" containerID="7eb50fbce867eaf052b8ab15b831b876e3d8047330eb44ab72ea243910a5a730" Dec 11 16:25:42 crc kubenswrapper[5120]: E1211 16:25:42.263983 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7eb50fbce867eaf052b8ab15b831b876e3d8047330eb44ab72ea243910a5a730\": container with ID starting with 7eb50fbce867eaf052b8ab15b831b876e3d8047330eb44ab72ea243910a5a730 not found: ID does not exist" containerID="7eb50fbce867eaf052b8ab15b831b876e3d8047330eb44ab72ea243910a5a730" Dec 11 16:25:42 crc kubenswrapper[5120]: I1211 16:25:42.264017 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eb50fbce867eaf052b8ab15b831b876e3d8047330eb44ab72ea243910a5a730"} err="failed to get container status \"7eb50fbce867eaf052b8ab15b831b876e3d8047330eb44ab72ea243910a5a730\": rpc error: code = NotFound desc = could not find container \"7eb50fbce867eaf052b8ab15b831b876e3d8047330eb44ab72ea243910a5a730\": 
container with ID starting with 7eb50fbce867eaf052b8ab15b831b876e3d8047330eb44ab72ea243910a5a730 not found: ID does not exist" Dec 11 16:25:42 crc kubenswrapper[5120]: I1211 16:25:42.264036 5120 scope.go:117] "RemoveContainer" containerID="92fa03c3e88d9965239df7683ae97584fc84b5d181924ff0ee9605fd1a9b24d8" Dec 11 16:25:42 crc kubenswrapper[5120]: E1211 16:25:42.264256 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92fa03c3e88d9965239df7683ae97584fc84b5d181924ff0ee9605fd1a9b24d8\": container with ID starting with 92fa03c3e88d9965239df7683ae97584fc84b5d181924ff0ee9605fd1a9b24d8 not found: ID does not exist" containerID="92fa03c3e88d9965239df7683ae97584fc84b5d181924ff0ee9605fd1a9b24d8" Dec 11 16:25:42 crc kubenswrapper[5120]: I1211 16:25:42.264293 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92fa03c3e88d9965239df7683ae97584fc84b5d181924ff0ee9605fd1a9b24d8"} err="failed to get container status \"92fa03c3e88d9965239df7683ae97584fc84b5d181924ff0ee9605fd1a9b24d8\": rpc error: code = NotFound desc = could not find container \"92fa03c3e88d9965239df7683ae97584fc84b5d181924ff0ee9605fd1a9b24d8\": container with ID starting with 92fa03c3e88d9965239df7683ae97584fc84b5d181924ff0ee9605fd1a9b24d8 not found: ID does not exist" Dec 11 16:25:42 crc kubenswrapper[5120]: I1211 16:25:42.264308 5120 scope.go:117] "RemoveContainer" containerID="61f00dc0d4395fbbf2876fde24ef3dc27d128efcadcb5259da0eeb665a02520b" Dec 11 16:25:42 crc kubenswrapper[5120]: E1211 16:25:42.264481 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61f00dc0d4395fbbf2876fde24ef3dc27d128efcadcb5259da0eeb665a02520b\": container with ID starting with 61f00dc0d4395fbbf2876fde24ef3dc27d128efcadcb5259da0eeb665a02520b not found: ID does not exist" 
containerID="61f00dc0d4395fbbf2876fde24ef3dc27d128efcadcb5259da0eeb665a02520b" Dec 11 16:25:42 crc kubenswrapper[5120]: I1211 16:25:42.264501 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61f00dc0d4395fbbf2876fde24ef3dc27d128efcadcb5259da0eeb665a02520b"} err="failed to get container status \"61f00dc0d4395fbbf2876fde24ef3dc27d128efcadcb5259da0eeb665a02520b\": rpc error: code = NotFound desc = could not find container \"61f00dc0d4395fbbf2876fde24ef3dc27d128efcadcb5259da0eeb665a02520b\": container with ID starting with 61f00dc0d4395fbbf2876fde24ef3dc27d128efcadcb5259da0eeb665a02520b not found: ID does not exist" Dec 11 16:25:42 crc kubenswrapper[5120]: I1211 16:25:42.285729 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x7q7g\" (UniqueName: \"kubernetes.io/projected/2ad81319-1779-4f6d-a675-2836d0075e01-kube-api-access-x7q7g\") on node \"crc\" DevicePath \"\"" Dec 11 16:25:42 crc kubenswrapper[5120]: I1211 16:25:42.285757 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ad81319-1779-4f6d-a675-2836d0075e01-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 16:25:42 crc kubenswrapper[5120]: I1211 16:25:42.285767 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ad81319-1779-4f6d-a675-2836d0075e01-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 16:25:42 crc kubenswrapper[5120]: I1211 16:25:42.533623 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mvqq6"] Dec 11 16:25:42 crc kubenswrapper[5120]: I1211 16:25:42.537773 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mvqq6"] Dec 11 16:25:42 crc kubenswrapper[5120]: I1211 16:25:42.712401 5120 ???:1] "http: TLS handshake error from 192.168.126.11:36162: no serving certificate available for the 
kubelet" Dec 11 16:25:42 crc kubenswrapper[5120]: I1211 16:25:42.882026 5120 ???:1] "http: TLS handshake error from 192.168.126.11:36176: no serving certificate available for the kubelet" Dec 11 16:25:42 crc kubenswrapper[5120]: I1211 16:25:42.922133 5120 ???:1] "http: TLS handshake error from 192.168.126.11:36186: no serving certificate available for the kubelet" Dec 11 16:25:43 crc kubenswrapper[5120]: I1211 16:25:43.032725 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ad81319-1779-4f6d-a675-2836d0075e01" path="/var/lib/kubelet/pods/2ad81319-1779-4f6d-a675-2836d0075e01/volumes" Dec 11 16:25:45 crc kubenswrapper[5120]: E1211 16:25:45.022248 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:25:51 crc kubenswrapper[5120]: E1211 16:25:51.027989 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: 
pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:25:51 crc kubenswrapper[5120]: I1211 16:25:51.475022 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qzwn6_7143452f-c193-4dbf-872c-a3ae9245f158/kube-multus/0.log" Dec 11 16:25:51 crc kubenswrapper[5120]: I1211 16:25:51.478668 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qzwn6_7143452f-c193-4dbf-872c-a3ae9245f158/kube-multus/0.log" Dec 11 16:25:51 crc kubenswrapper[5120]: I1211 16:25:51.479461 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 11 16:25:51 crc kubenswrapper[5120]: I1211 16:25:51.483344 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 11 16:25:54 crc kubenswrapper[5120]: I1211 16:25:54.522638 5120 ???:1] "http: TLS handshake error from 192.168.126.11:54680: no serving certificate available for the kubelet" Dec 11 16:25:54 crc kubenswrapper[5120]: I1211 16:25:54.639147 5120 ???:1] "http: TLS handshake error from 192.168.126.11:54684: no serving certificate available for the kubelet" Dec 11 16:25:54 crc kubenswrapper[5120]: I1211 16:25:54.690199 5120 ???:1] "http: TLS handshake error from 
192.168.126.11:54692: no serving certificate available for the kubelet" Dec 11 16:25:59 crc kubenswrapper[5120]: E1211 16:25:59.033559 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:26:06 crc kubenswrapper[5120]: E1211 16:26:06.022441 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" 
pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:26:09 crc kubenswrapper[5120]: I1211 16:26:09.236027 5120 ???:1] "http: TLS handshake error from 192.168.126.11:39766: no serving certificate available for the kubelet" Dec 11 16:26:09 crc kubenswrapper[5120]: I1211 16:26:09.386667 5120 ???:1] "http: TLS handshake error from 192.168.126.11:39774: no serving certificate available for the kubelet" Dec 11 16:26:09 crc kubenswrapper[5120]: I1211 16:26:09.411876 5120 ???:1] "http: TLS handshake error from 192.168.126.11:39780: no serving certificate available for the kubelet" Dec 11 16:26:09 crc kubenswrapper[5120]: I1211 16:26:09.418536 5120 ???:1] "http: TLS handshake error from 192.168.126.11:39784: no serving certificate available for the kubelet" Dec 11 16:26:09 crc kubenswrapper[5120]: I1211 16:26:09.533571 5120 ???:1] "http: TLS handshake error from 192.168.126.11:39788: no serving certificate available for the kubelet" Dec 11 16:26:09 crc kubenswrapper[5120]: I1211 16:26:09.567837 5120 ???:1] "http: TLS handshake error from 192.168.126.11:39792: no serving certificate available for the kubelet" Dec 11 16:26:09 crc kubenswrapper[5120]: I1211 16:26:09.590775 5120 ???:1] "http: TLS handshake error from 192.168.126.11:39804: no serving certificate available for the kubelet" Dec 11 16:26:09 crc kubenswrapper[5120]: I1211 16:26:09.713901 5120 ???:1] "http: TLS handshake error from 192.168.126.11:39820: no serving certificate available for the kubelet" Dec 11 16:26:09 crc kubenswrapper[5120]: I1211 16:26:09.873079 5120 ???:1] "http: TLS handshake error from 192.168.126.11:39830: no serving certificate available for the kubelet" Dec 11 16:26:09 crc kubenswrapper[5120]: I1211 16:26:09.880330 5120 ???:1] "http: TLS handshake error from 192.168.126.11:39836: no serving certificate available for the kubelet" Dec 11 16:26:09 crc kubenswrapper[5120]: I1211 16:26:09.887587 5120 ???:1] "http: TLS handshake error from 
192.168.126.11:39850: no serving certificate available for the kubelet" Dec 11 16:26:10 crc kubenswrapper[5120]: I1211 16:26:10.021066 5120 ???:1] "http: TLS handshake error from 192.168.126.11:39866: no serving certificate available for the kubelet" Dec 11 16:26:10 crc kubenswrapper[5120]: I1211 16:26:10.021897 5120 ???:1] "http: TLS handshake error from 192.168.126.11:39882: no serving certificate available for the kubelet" Dec 11 16:26:10 crc kubenswrapper[5120]: E1211 16:26:10.023845 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:26:10 crc kubenswrapper[5120]: I1211 16:26:10.065214 5120 ???:1] "http: TLS handshake error from 192.168.126.11:39894: no serving certificate available for the kubelet" Dec 11 16:26:10 crc kubenswrapper[5120]: I1211 16:26:10.183453 5120 ???:1] "http: TLS handshake error from 192.168.126.11:39908: no serving certificate available for the kubelet" Dec 11 16:26:10 crc kubenswrapper[5120]: I1211 16:26:10.426105 5120 ???:1] "http: TLS handshake error from 192.168.126.11:39918: no serving certificate available for the kubelet" Dec 11 16:26:10 crc kubenswrapper[5120]: 
I1211 16:26:10.434475 5120 ???:1] "http: TLS handshake error from 192.168.126.11:39920: no serving certificate available for the kubelet" Dec 11 16:26:10 crc kubenswrapper[5120]: I1211 16:26:10.482017 5120 ???:1] "http: TLS handshake error from 192.168.126.11:39934: no serving certificate available for the kubelet" Dec 11 16:26:10 crc kubenswrapper[5120]: I1211 16:26:10.632383 5120 ???:1] "http: TLS handshake error from 192.168.126.11:39950: no serving certificate available for the kubelet" Dec 11 16:26:10 crc kubenswrapper[5120]: I1211 16:26:10.646009 5120 ???:1] "http: TLS handshake error from 192.168.126.11:39960: no serving certificate available for the kubelet" Dec 11 16:26:10 crc kubenswrapper[5120]: I1211 16:26:10.698315 5120 ???:1] "http: TLS handshake error from 192.168.126.11:39976: no serving certificate available for the kubelet" Dec 11 16:26:10 crc kubenswrapper[5120]: I1211 16:26:10.858282 5120 ???:1] "http: TLS handshake error from 192.168.126.11:39980: no serving certificate available for the kubelet" Dec 11 16:26:10 crc kubenswrapper[5120]: I1211 16:26:10.999568 5120 ???:1] "http: TLS handshake error from 192.168.126.11:39984: no serving certificate available for the kubelet" Dec 11 16:26:11 crc kubenswrapper[5120]: I1211 16:26:11.034642 5120 ???:1] "http: TLS handshake error from 192.168.126.11:39990: no serving certificate available for the kubelet" Dec 11 16:26:11 crc kubenswrapper[5120]: I1211 16:26:11.039551 5120 ???:1] "http: TLS handshake error from 192.168.126.11:40000: no serving certificate available for the kubelet" Dec 11 16:26:11 crc kubenswrapper[5120]: I1211 16:26:11.200370 5120 ???:1] "http: TLS handshake error from 192.168.126.11:40016: no serving certificate available for the kubelet" Dec 11 16:26:11 crc kubenswrapper[5120]: I1211 16:26:11.209444 5120 ???:1] "http: TLS handshake error from 192.168.126.11:40024: no serving certificate available for the kubelet" Dec 11 16:26:11 crc kubenswrapper[5120]: I1211 16:26:11.238674 5120 
???:1] "http: TLS handshake error from 192.168.126.11:40036: no serving certificate available for the kubelet" Dec 11 16:26:11 crc kubenswrapper[5120]: I1211 16:26:11.352498 5120 ???:1] "http: TLS handshake error from 192.168.126.11:40046: no serving certificate available for the kubelet" Dec 11 16:26:11 crc kubenswrapper[5120]: I1211 16:26:11.517102 5120 ???:1] "http: TLS handshake error from 192.168.126.11:40048: no serving certificate available for the kubelet" Dec 11 16:26:11 crc kubenswrapper[5120]: I1211 16:26:11.523409 5120 ???:1] "http: TLS handshake error from 192.168.126.11:40058: no serving certificate available for the kubelet" Dec 11 16:26:11 crc kubenswrapper[5120]: I1211 16:26:11.556565 5120 ???:1] "http: TLS handshake error from 192.168.126.11:40070: no serving certificate available for the kubelet" Dec 11 16:26:11 crc kubenswrapper[5120]: I1211 16:26:11.719622 5120 ???:1] "http: TLS handshake error from 192.168.126.11:40074: no serving certificate available for the kubelet" Dec 11 16:26:11 crc kubenswrapper[5120]: I1211 16:26:11.720096 5120 ???:1] "http: TLS handshake error from 192.168.126.11:40076: no serving certificate available for the kubelet" Dec 11 16:26:11 crc kubenswrapper[5120]: I1211 16:26:11.728393 5120 ???:1] "http: TLS handshake error from 192.168.126.11:40086: no serving certificate available for the kubelet" Dec 11 16:26:11 crc kubenswrapper[5120]: I1211 16:26:11.756102 5120 ???:1] "http: TLS handshake error from 192.168.126.11:40094: no serving certificate available for the kubelet" Dec 11 16:26:11 crc kubenswrapper[5120]: I1211 16:26:11.890767 5120 ???:1] "http: TLS handshake error from 192.168.126.11:40110: no serving certificate available for the kubelet" Dec 11 16:26:12 crc kubenswrapper[5120]: I1211 16:26:12.052709 5120 ???:1] "http: TLS handshake error from 192.168.126.11:40112: no serving certificate available for the kubelet" Dec 11 16:26:12 crc kubenswrapper[5120]: I1211 16:26:12.063196 5120 ???:1] "http: TLS handshake 
error from 192.168.126.11:40114: no serving certificate available for the kubelet" Dec 11 16:26:12 crc kubenswrapper[5120]: I1211 16:26:12.076003 5120 ???:1] "http: TLS handshake error from 192.168.126.11:40128: no serving certificate available for the kubelet" Dec 11 16:26:12 crc kubenswrapper[5120]: I1211 16:26:12.275607 5120 ???:1] "http: TLS handshake error from 192.168.126.11:40138: no serving certificate available for the kubelet" Dec 11 16:26:12 crc kubenswrapper[5120]: I1211 16:26:12.276199 5120 ???:1] "http: TLS handshake error from 192.168.126.11:40154: no serving certificate available for the kubelet" Dec 11 16:26:12 crc kubenswrapper[5120]: I1211 16:26:12.279822 5120 ???:1] "http: TLS handshake error from 192.168.126.11:40156: no serving certificate available for the kubelet" Dec 11 16:26:17 crc kubenswrapper[5120]: E1211 16:26:17.022711 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:26:21 crc kubenswrapper[5120]: E1211 16:26:21.028248 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off 
pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:26:22 crc kubenswrapper[5120]: I1211 16:26:22.626893 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w756n"] Dec 11 16:26:22 crc kubenswrapper[5120]: I1211 16:26:22.627736 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2ad81319-1779-4f6d-a675-2836d0075e01" containerName="extract-utilities" Dec 11 16:26:22 crc kubenswrapper[5120]: I1211 16:26:22.627748 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ad81319-1779-4f6d-a675-2836d0075e01" containerName="extract-utilities" Dec 11 16:26:22 crc kubenswrapper[5120]: I1211 16:26:22.627764 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2ad81319-1779-4f6d-a675-2836d0075e01" containerName="registry-server" Dec 11 16:26:22 crc kubenswrapper[5120]: I1211 16:26:22.627770 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ad81319-1779-4f6d-a675-2836d0075e01" containerName="registry-server" Dec 11 16:26:22 crc kubenswrapper[5120]: I1211 16:26:22.627781 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2ad81319-1779-4f6d-a675-2836d0075e01" containerName="extract-content" Dec 11 
16:26:22 crc kubenswrapper[5120]: I1211 16:26:22.627788 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ad81319-1779-4f6d-a675-2836d0075e01" containerName="extract-content" Dec 11 16:26:22 crc kubenswrapper[5120]: I1211 16:26:22.627875 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="2ad81319-1779-4f6d-a675-2836d0075e01" containerName="registry-server" Dec 11 16:26:22 crc kubenswrapper[5120]: I1211 16:26:22.844085 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w756n"] Dec 11 16:26:22 crc kubenswrapper[5120]: I1211 16:26:22.844278 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w756n" Dec 11 16:26:22 crc kubenswrapper[5120]: I1211 16:26:22.982129 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/651a94bf-0f00-459c-884c-c927ae1f0164-catalog-content\") pod \"certified-operators-w756n\" (UID: \"651a94bf-0f00-459c-884c-c927ae1f0164\") " pod="openshift-marketplace/certified-operators-w756n" Dec 11 16:26:22 crc kubenswrapper[5120]: I1211 16:26:22.982202 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nlgd\" (UniqueName: \"kubernetes.io/projected/651a94bf-0f00-459c-884c-c927ae1f0164-kube-api-access-8nlgd\") pod \"certified-operators-w756n\" (UID: \"651a94bf-0f00-459c-884c-c927ae1f0164\") " pod="openshift-marketplace/certified-operators-w756n" Dec 11 16:26:22 crc kubenswrapper[5120]: I1211 16:26:22.982246 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/651a94bf-0f00-459c-884c-c927ae1f0164-utilities\") pod \"certified-operators-w756n\" (UID: \"651a94bf-0f00-459c-884c-c927ae1f0164\") " 
pod="openshift-marketplace/certified-operators-w756n" Dec 11 16:26:23 crc kubenswrapper[5120]: I1211 16:26:23.083786 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/651a94bf-0f00-459c-884c-c927ae1f0164-catalog-content\") pod \"certified-operators-w756n\" (UID: \"651a94bf-0f00-459c-884c-c927ae1f0164\") " pod="openshift-marketplace/certified-operators-w756n" Dec 11 16:26:23 crc kubenswrapper[5120]: I1211 16:26:23.083841 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nlgd\" (UniqueName: \"kubernetes.io/projected/651a94bf-0f00-459c-884c-c927ae1f0164-kube-api-access-8nlgd\") pod \"certified-operators-w756n\" (UID: \"651a94bf-0f00-459c-884c-c927ae1f0164\") " pod="openshift-marketplace/certified-operators-w756n" Dec 11 16:26:23 crc kubenswrapper[5120]: I1211 16:26:23.083888 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/651a94bf-0f00-459c-884c-c927ae1f0164-utilities\") pod \"certified-operators-w756n\" (UID: \"651a94bf-0f00-459c-884c-c927ae1f0164\") " pod="openshift-marketplace/certified-operators-w756n" Dec 11 16:26:23 crc kubenswrapper[5120]: I1211 16:26:23.084444 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/651a94bf-0f00-459c-884c-c927ae1f0164-catalog-content\") pod \"certified-operators-w756n\" (UID: \"651a94bf-0f00-459c-884c-c927ae1f0164\") " pod="openshift-marketplace/certified-operators-w756n" Dec 11 16:26:23 crc kubenswrapper[5120]: I1211 16:26:23.084483 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/651a94bf-0f00-459c-884c-c927ae1f0164-utilities\") pod \"certified-operators-w756n\" (UID: \"651a94bf-0f00-459c-884c-c927ae1f0164\") " 
pod="openshift-marketplace/certified-operators-w756n" Dec 11 16:26:23 crc kubenswrapper[5120]: I1211 16:26:23.103036 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nlgd\" (UniqueName: \"kubernetes.io/projected/651a94bf-0f00-459c-884c-c927ae1f0164-kube-api-access-8nlgd\") pod \"certified-operators-w756n\" (UID: \"651a94bf-0f00-459c-884c-c927ae1f0164\") " pod="openshift-marketplace/certified-operators-w756n" Dec 11 16:26:23 crc kubenswrapper[5120]: I1211 16:26:23.158682 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w756n" Dec 11 16:26:23 crc kubenswrapper[5120]: I1211 16:26:23.393922 5120 ???:1] "http: TLS handshake error from 192.168.126.11:59574: no serving certificate available for the kubelet" Dec 11 16:26:23 crc kubenswrapper[5120]: I1211 16:26:23.407181 5120 ???:1] "http: TLS handshake error from 192.168.126.11:59588: no serving certificate available for the kubelet" Dec 11 16:26:23 crc kubenswrapper[5120]: I1211 16:26:23.560331 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w756n"] Dec 11 16:26:23 crc kubenswrapper[5120]: W1211 16:26:23.565435 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod651a94bf_0f00_459c_884c_c927ae1f0164.slice/crio-db07563648af5790be7086e93f5842dfb00a9d28d8ed54f2764a5c4944d3c7f7 WatchSource:0}: Error finding container db07563648af5790be7086e93f5842dfb00a9d28d8ed54f2764a5c4944d3c7f7: Status 404 returned error can't find the container with id db07563648af5790be7086e93f5842dfb00a9d28d8ed54f2764a5c4944d3c7f7 Dec 11 16:26:23 crc kubenswrapper[5120]: I1211 16:26:23.590645 5120 ???:1] "http: TLS handshake error from 192.168.126.11:59596: no serving certificate available for the kubelet" Dec 11 16:26:23 crc kubenswrapper[5120]: I1211 16:26:23.662979 5120 ???:1] "http: TLS handshake error 
from 192.168.126.11:59606: no serving certificate available for the kubelet" Dec 11 16:26:23 crc kubenswrapper[5120]: I1211 16:26:23.799338 5120 ???:1] "http: TLS handshake error from 192.168.126.11:59612: no serving certificate available for the kubelet" Dec 11 16:26:24 crc kubenswrapper[5120]: I1211 16:26:24.490351 5120 generic.go:358] "Generic (PLEG): container finished" podID="651a94bf-0f00-459c-884c-c927ae1f0164" containerID="743797792b2cf898aa440b40c169a0b5f120df9dd16ec9c1de5277baa51e8e44" exitCode=0 Dec 11 16:26:24 crc kubenswrapper[5120]: I1211 16:26:24.490394 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w756n" event={"ID":"651a94bf-0f00-459c-884c-c927ae1f0164","Type":"ContainerDied","Data":"743797792b2cf898aa440b40c169a0b5f120df9dd16ec9c1de5277baa51e8e44"} Dec 11 16:26:24 crc kubenswrapper[5120]: I1211 16:26:24.490442 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w756n" event={"ID":"651a94bf-0f00-459c-884c-c927ae1f0164","Type":"ContainerStarted","Data":"db07563648af5790be7086e93f5842dfb00a9d28d8ed54f2764a5c4944d3c7f7"} Dec 11 16:26:25 crc kubenswrapper[5120]: I1211 16:26:25.498982 5120 generic.go:358] "Generic (PLEG): container finished" podID="651a94bf-0f00-459c-884c-c927ae1f0164" containerID="13a542426f5d1065fe9a61c2c08667633302480cee35e4186dd9d5128a729d0f" exitCode=0 Dec 11 16:26:25 crc kubenswrapper[5120]: I1211 16:26:25.499052 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w756n" event={"ID":"651a94bf-0f00-459c-884c-c927ae1f0164","Type":"ContainerDied","Data":"13a542426f5d1065fe9a61c2c08667633302480cee35e4186dd9d5128a729d0f"} Dec 11 16:26:26 crc kubenswrapper[5120]: I1211 16:26:26.506323 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w756n" 
event={"ID":"651a94bf-0f00-459c-884c-c927ae1f0164","Type":"ContainerStarted","Data":"e7f6ad931837723188b78ad00c4f54036178ef18e9c20c23a5cd796969c1c908"} Dec 11 16:26:26 crc kubenswrapper[5120]: I1211 16:26:26.528892 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w756n" podStartSLOduration=3.932458627 podStartE2EDuration="4.528877307s" podCreationTimestamp="2025-12-11 16:26:22 +0000 UTC" firstStartedPulling="2025-12-11 16:26:24.493103446 +0000 UTC m=+1533.747406817" lastFinishedPulling="2025-12-11 16:26:25.089522166 +0000 UTC m=+1534.343825497" observedRunningTime="2025-12-11 16:26:26.523584108 +0000 UTC m=+1535.777887439" watchObservedRunningTime="2025-12-11 16:26:26.528877307 +0000 UTC m=+1535.783180639" Dec 11 16:26:30 crc kubenswrapper[5120]: E1211 16:26:30.022555 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:26:33 crc kubenswrapper[5120]: I1211 16:26:33.159113 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-w756n" Dec 11 16:26:33 crc 
kubenswrapper[5120]: I1211 16:26:33.159397 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w756n" Dec 11 16:26:33 crc kubenswrapper[5120]: I1211 16:26:33.210702 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w756n" Dec 11 16:26:33 crc kubenswrapper[5120]: I1211 16:26:33.623857 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w756n" Dec 11 16:26:33 crc kubenswrapper[5120]: I1211 16:26:33.682543 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w756n"] Dec 11 16:26:34 crc kubenswrapper[5120]: I1211 16:26:34.022028 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 16:26:34 crc kubenswrapper[5120]: E1211 16:26:34.022780 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:26:35 crc kubenswrapper[5120]: I1211 16:26:35.565103 5120 kuberuntime_container.go:858] "Killing container 
with a grace period" pod="openshift-marketplace/certified-operators-w756n" podUID="651a94bf-0f00-459c-884c-c927ae1f0164" containerName="registry-server" containerID="cri-o://e7f6ad931837723188b78ad00c4f54036178ef18e9c20c23a5cd796969c1c908" gracePeriod=2 Dec 11 16:26:36 crc kubenswrapper[5120]: I1211 16:26:36.586342 5120 generic.go:358] "Generic (PLEG): container finished" podID="651a94bf-0f00-459c-884c-c927ae1f0164" containerID="e7f6ad931837723188b78ad00c4f54036178ef18e9c20c23a5cd796969c1c908" exitCode=0 Dec 11 16:26:36 crc kubenswrapper[5120]: I1211 16:26:36.586456 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w756n" event={"ID":"651a94bf-0f00-459c-884c-c927ae1f0164","Type":"ContainerDied","Data":"e7f6ad931837723188b78ad00c4f54036178ef18e9c20c23a5cd796969c1c908"} Dec 11 16:26:36 crc kubenswrapper[5120]: I1211 16:26:36.692648 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w756n" Dec 11 16:26:36 crc kubenswrapper[5120]: I1211 16:26:36.779326 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/651a94bf-0f00-459c-884c-c927ae1f0164-utilities\") pod \"651a94bf-0f00-459c-884c-c927ae1f0164\" (UID: \"651a94bf-0f00-459c-884c-c927ae1f0164\") " Dec 11 16:26:36 crc kubenswrapper[5120]: I1211 16:26:36.779623 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nlgd\" (UniqueName: \"kubernetes.io/projected/651a94bf-0f00-459c-884c-c927ae1f0164-kube-api-access-8nlgd\") pod \"651a94bf-0f00-459c-884c-c927ae1f0164\" (UID: \"651a94bf-0f00-459c-884c-c927ae1f0164\") " Dec 11 16:26:36 crc kubenswrapper[5120]: I1211 16:26:36.779782 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/651a94bf-0f00-459c-884c-c927ae1f0164-catalog-content\") pod \"651a94bf-0f00-459c-884c-c927ae1f0164\" (UID: \"651a94bf-0f00-459c-884c-c927ae1f0164\") " Dec 11 16:26:36 crc kubenswrapper[5120]: I1211 16:26:36.781771 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/651a94bf-0f00-459c-884c-c927ae1f0164-utilities" (OuterVolumeSpecName: "utilities") pod "651a94bf-0f00-459c-884c-c927ae1f0164" (UID: "651a94bf-0f00-459c-884c-c927ae1f0164"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:26:36 crc kubenswrapper[5120]: I1211 16:26:36.789716 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/651a94bf-0f00-459c-884c-c927ae1f0164-kube-api-access-8nlgd" (OuterVolumeSpecName: "kube-api-access-8nlgd") pod "651a94bf-0f00-459c-884c-c927ae1f0164" (UID: "651a94bf-0f00-459c-884c-c927ae1f0164"). InnerVolumeSpecName "kube-api-access-8nlgd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:26:36 crc kubenswrapper[5120]: I1211 16:26:36.821063 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/651a94bf-0f00-459c-884c-c927ae1f0164-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "651a94bf-0f00-459c-884c-c927ae1f0164" (UID: "651a94bf-0f00-459c-884c-c927ae1f0164"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:26:36 crc kubenswrapper[5120]: I1211 16:26:36.882378 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/651a94bf-0f00-459c-884c-c927ae1f0164-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 16:26:36 crc kubenswrapper[5120]: I1211 16:26:36.882455 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/651a94bf-0f00-459c-884c-c927ae1f0164-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 16:26:36 crc kubenswrapper[5120]: I1211 16:26:36.882475 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nlgd\" (UniqueName: \"kubernetes.io/projected/651a94bf-0f00-459c-884c-c927ae1f0164-kube-api-access-8nlgd\") on node \"crc\" DevicePath \"\"" Dec 11 16:26:37 crc kubenswrapper[5120]: I1211 16:26:37.596714 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w756n" event={"ID":"651a94bf-0f00-459c-884c-c927ae1f0164","Type":"ContainerDied","Data":"db07563648af5790be7086e93f5842dfb00a9d28d8ed54f2764a5c4944d3c7f7"} Dec 11 16:26:37 crc kubenswrapper[5120]: I1211 16:26:37.596763 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w756n" Dec 11 16:26:37 crc kubenswrapper[5120]: I1211 16:26:37.596792 5120 scope.go:117] "RemoveContainer" containerID="e7f6ad931837723188b78ad00c4f54036178ef18e9c20c23a5cd796969c1c908" Dec 11 16:26:37 crc kubenswrapper[5120]: I1211 16:26:37.626165 5120 scope.go:117] "RemoveContainer" containerID="13a542426f5d1065fe9a61c2c08667633302480cee35e4186dd9d5128a729d0f" Dec 11 16:26:37 crc kubenswrapper[5120]: I1211 16:26:37.631198 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w756n"] Dec 11 16:26:37 crc kubenswrapper[5120]: I1211 16:26:37.635600 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w756n"] Dec 11 16:26:37 crc kubenswrapper[5120]: I1211 16:26:37.671197 5120 scope.go:117] "RemoveContainer" containerID="743797792b2cf898aa440b40c169a0b5f120df9dd16ec9c1de5277baa51e8e44" Dec 11 16:26:39 crc kubenswrapper[5120]: I1211 16:26:39.034648 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="651a94bf-0f00-459c-884c-c927ae1f0164" path="/var/lib/kubelet/pods/651a94bf-0f00-459c-884c-c927ae1f0164/volumes" Dec 11 16:26:44 crc kubenswrapper[5120]: E1211 16:26:44.023205 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:26:47 crc kubenswrapper[5120]: E1211 16:26:47.025663 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:26:56 crc kubenswrapper[5120]: E1211 16:26:56.022565 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in 
image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:26:58 crc kubenswrapper[5120]: I1211 16:26:58.771466 5120 generic.go:358] "Generic (PLEG): container finished" podID="2a5cf5d5-9069-4f40-afc8-984315ba76ac" containerID="e89b16ffc410e506074d53a5f922cd16385c3da3b402be4450e0071ecb15665f" exitCode=0 Dec 11 16:26:58 crc kubenswrapper[5120]: I1211 16:26:58.771626 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kj2bx/must-gather-f5cwd" event={"ID":"2a5cf5d5-9069-4f40-afc8-984315ba76ac","Type":"ContainerDied","Data":"e89b16ffc410e506074d53a5f922cd16385c3da3b402be4450e0071ecb15665f"} Dec 11 16:26:58 crc kubenswrapper[5120]: I1211 16:26:58.772708 5120 scope.go:117] "RemoveContainer" containerID="e89b16ffc410e506074d53a5f922cd16385c3da3b402be4450e0071ecb15665f" Dec 11 16:27:02 crc kubenswrapper[5120]: E1211 16:27:02.022509 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:27:07 crc 
kubenswrapper[5120]: I1211 16:27:07.710039 5120 ???:1] "http: TLS handshake error from 192.168.126.11:51570: no serving certificate available for the kubelet" Dec 11 16:27:07 crc kubenswrapper[5120]: I1211 16:27:07.917661 5120 ???:1] "http: TLS handshake error from 192.168.126.11:51576: no serving certificate available for the kubelet" Dec 11 16:27:07 crc kubenswrapper[5120]: I1211 16:27:07.927904 5120 ???:1] "http: TLS handshake error from 192.168.126.11:51592: no serving certificate available for the kubelet" Dec 11 16:27:07 crc kubenswrapper[5120]: I1211 16:27:07.953757 5120 ???:1] "http: TLS handshake error from 192.168.126.11:51600: no serving certificate available for the kubelet" Dec 11 16:27:07 crc kubenswrapper[5120]: I1211 16:27:07.966185 5120 ???:1] "http: TLS handshake error from 192.168.126.11:51602: no serving certificate available for the kubelet" Dec 11 16:27:07 crc kubenswrapper[5120]: I1211 16:27:07.980682 5120 ???:1] "http: TLS handshake error from 192.168.126.11:51610: no serving certificate available for the kubelet" Dec 11 16:27:07 crc kubenswrapper[5120]: I1211 16:27:07.992783 5120 ???:1] "http: TLS handshake error from 192.168.126.11:51622: no serving certificate available for the kubelet" Dec 11 16:27:08 crc kubenswrapper[5120]: I1211 16:27:08.009990 5120 ???:1] "http: TLS handshake error from 192.168.126.11:51628: no serving certificate available for the kubelet" Dec 11 16:27:08 crc kubenswrapper[5120]: I1211 16:27:08.022449 5120 ???:1] "http: TLS handshake error from 192.168.126.11:51630: no serving certificate available for the kubelet" Dec 11 16:27:08 crc kubenswrapper[5120]: I1211 16:27:08.176965 5120 ???:1] "http: TLS handshake error from 192.168.126.11:51642: no serving certificate available for the kubelet" Dec 11 16:27:08 crc kubenswrapper[5120]: I1211 16:27:08.191081 5120 ???:1] "http: TLS handshake error from 192.168.126.11:51654: no serving certificate available for the kubelet" Dec 11 16:27:08 crc kubenswrapper[5120]: I1211 
16:27:08.220285 5120 ???:1] "http: TLS handshake error from 192.168.126.11:51656: no serving certificate available for the kubelet" Dec 11 16:27:08 crc kubenswrapper[5120]: I1211 16:27:08.233074 5120 ???:1] "http: TLS handshake error from 192.168.126.11:51662: no serving certificate available for the kubelet" Dec 11 16:27:08 crc kubenswrapper[5120]: I1211 16:27:08.246253 5120 ???:1] "http: TLS handshake error from 192.168.126.11:51668: no serving certificate available for the kubelet" Dec 11 16:27:08 crc kubenswrapper[5120]: I1211 16:27:08.256049 5120 ???:1] "http: TLS handshake error from 192.168.126.11:51684: no serving certificate available for the kubelet" Dec 11 16:27:08 crc kubenswrapper[5120]: I1211 16:27:08.267254 5120 ???:1] "http: TLS handshake error from 192.168.126.11:51700: no serving certificate available for the kubelet" Dec 11 16:27:08 crc kubenswrapper[5120]: I1211 16:27:08.276112 5120 ???:1] "http: TLS handshake error from 192.168.126.11:51702: no serving certificate available for the kubelet" Dec 11 16:27:10 crc kubenswrapper[5120]: E1211 16:27:10.021871 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" 
podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:27:13 crc kubenswrapper[5120]: I1211 16:27:13.313926 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kj2bx/must-gather-f5cwd"] Dec 11 16:27:13 crc kubenswrapper[5120]: I1211 16:27:13.314706 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-kj2bx/must-gather-f5cwd" podUID="2a5cf5d5-9069-4f40-afc8-984315ba76ac" containerName="copy" containerID="cri-o://923b9dadc94c34e08cc8382550c434efc2d753d0d064b7face89e77d732219cd" gracePeriod=2 Dec 11 16:27:13 crc kubenswrapper[5120]: I1211 16:27:13.316728 5120 status_manager.go:895] "Failed to get status for pod" podUID="2a5cf5d5-9069-4f40-afc8-984315ba76ac" pod="openshift-must-gather-kj2bx/must-gather-f5cwd" err="pods \"must-gather-f5cwd\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-kj2bx\": no relationship found between node 'crc' and this object" Dec 11 16:27:13 crc kubenswrapper[5120]: I1211 16:27:13.319889 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kj2bx/must-gather-f5cwd"] Dec 11 16:27:13 crc kubenswrapper[5120]: I1211 16:27:13.677804 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kj2bx_must-gather-f5cwd_2a5cf5d5-9069-4f40-afc8-984315ba76ac/copy/0.log" Dec 11 16:27:13 crc kubenswrapper[5120]: I1211 16:27:13.678536 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kj2bx/must-gather-f5cwd" Dec 11 16:27:13 crc kubenswrapper[5120]: I1211 16:27:13.680078 5120 status_manager.go:895] "Failed to get status for pod" podUID="2a5cf5d5-9069-4f40-afc8-984315ba76ac" pod="openshift-must-gather-kj2bx/must-gather-f5cwd" err="pods \"must-gather-f5cwd\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-kj2bx\": no relationship found between node 'crc' and this object" Dec 11 16:27:13 crc kubenswrapper[5120]: I1211 16:27:13.757879 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2a5cf5d5-9069-4f40-afc8-984315ba76ac-must-gather-output\") pod \"2a5cf5d5-9069-4f40-afc8-984315ba76ac\" (UID: \"2a5cf5d5-9069-4f40-afc8-984315ba76ac\") " Dec 11 16:27:13 crc kubenswrapper[5120]: I1211 16:27:13.757943 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xt2p\" (UniqueName: \"kubernetes.io/projected/2a5cf5d5-9069-4f40-afc8-984315ba76ac-kube-api-access-6xt2p\") pod \"2a5cf5d5-9069-4f40-afc8-984315ba76ac\" (UID: \"2a5cf5d5-9069-4f40-afc8-984315ba76ac\") " Dec 11 16:27:13 crc kubenswrapper[5120]: I1211 16:27:13.763635 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a5cf5d5-9069-4f40-afc8-984315ba76ac-kube-api-access-6xt2p" (OuterVolumeSpecName: "kube-api-access-6xt2p") pod "2a5cf5d5-9069-4f40-afc8-984315ba76ac" (UID: "2a5cf5d5-9069-4f40-afc8-984315ba76ac"). InnerVolumeSpecName "kube-api-access-6xt2p". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 11 16:27:13 crc kubenswrapper[5120]: I1211 16:27:13.796173 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a5cf5d5-9069-4f40-afc8-984315ba76ac-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "2a5cf5d5-9069-4f40-afc8-984315ba76ac" (UID: "2a5cf5d5-9069-4f40-afc8-984315ba76ac"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 11 16:27:13 crc kubenswrapper[5120]: I1211 16:27:13.858865 5120 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2a5cf5d5-9069-4f40-afc8-984315ba76ac-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 11 16:27:13 crc kubenswrapper[5120]: I1211 16:27:13.858897 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6xt2p\" (UniqueName: \"kubernetes.io/projected/2a5cf5d5-9069-4f40-afc8-984315ba76ac-kube-api-access-6xt2p\") on node \"crc\" DevicePath \"\"" Dec 11 16:27:13 crc kubenswrapper[5120]: I1211 16:27:13.875946 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kj2bx_must-gather-f5cwd_2a5cf5d5-9069-4f40-afc8-984315ba76ac/copy/0.log" Dec 11 16:27:13 crc kubenswrapper[5120]: I1211 16:27:13.876395 5120 generic.go:358] "Generic (PLEG): container finished" podID="2a5cf5d5-9069-4f40-afc8-984315ba76ac" containerID="923b9dadc94c34e08cc8382550c434efc2d753d0d064b7face89e77d732219cd" exitCode=143 Dec 11 16:27:13 crc kubenswrapper[5120]: I1211 16:27:13.876474 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kj2bx/must-gather-f5cwd" Dec 11 16:27:13 crc kubenswrapper[5120]: I1211 16:27:13.876472 5120 scope.go:117] "RemoveContainer" containerID="923b9dadc94c34e08cc8382550c434efc2d753d0d064b7face89e77d732219cd" Dec 11 16:27:13 crc kubenswrapper[5120]: I1211 16:27:13.878393 5120 status_manager.go:895] "Failed to get status for pod" podUID="2a5cf5d5-9069-4f40-afc8-984315ba76ac" pod="openshift-must-gather-kj2bx/must-gather-f5cwd" err="pods \"must-gather-f5cwd\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-kj2bx\": no relationship found between node 'crc' and this object" Dec 11 16:27:13 crc kubenswrapper[5120]: I1211 16:27:13.900117 5120 scope.go:117] "RemoveContainer" containerID="e89b16ffc410e506074d53a5f922cd16385c3da3b402be4450e0071ecb15665f" Dec 11 16:27:13 crc kubenswrapper[5120]: I1211 16:27:13.900417 5120 status_manager.go:895] "Failed to get status for pod" podUID="2a5cf5d5-9069-4f40-afc8-984315ba76ac" pod="openshift-must-gather-kj2bx/must-gather-f5cwd" err="pods \"must-gather-f5cwd\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-kj2bx\": no relationship found between node 'crc' and this object" Dec 11 16:27:13 crc kubenswrapper[5120]: I1211 16:27:13.959561 5120 scope.go:117] "RemoveContainer" containerID="923b9dadc94c34e08cc8382550c434efc2d753d0d064b7face89e77d732219cd" Dec 11 16:27:13 crc kubenswrapper[5120]: E1211 16:27:13.960190 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"923b9dadc94c34e08cc8382550c434efc2d753d0d064b7face89e77d732219cd\": container with ID starting with 923b9dadc94c34e08cc8382550c434efc2d753d0d064b7face89e77d732219cd not found: ID does not exist" containerID="923b9dadc94c34e08cc8382550c434efc2d753d0d064b7face89e77d732219cd" Dec 11 16:27:13 crc 
kubenswrapper[5120]: I1211 16:27:13.960269 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"923b9dadc94c34e08cc8382550c434efc2d753d0d064b7face89e77d732219cd"} err="failed to get container status \"923b9dadc94c34e08cc8382550c434efc2d753d0d064b7face89e77d732219cd\": rpc error: code = NotFound desc = could not find container \"923b9dadc94c34e08cc8382550c434efc2d753d0d064b7face89e77d732219cd\": container with ID starting with 923b9dadc94c34e08cc8382550c434efc2d753d0d064b7face89e77d732219cd not found: ID does not exist" Dec 11 16:27:13 crc kubenswrapper[5120]: I1211 16:27:13.960299 5120 scope.go:117] "RemoveContainer" containerID="e89b16ffc410e506074d53a5f922cd16385c3da3b402be4450e0071ecb15665f" Dec 11 16:27:13 crc kubenswrapper[5120]: E1211 16:27:13.960677 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e89b16ffc410e506074d53a5f922cd16385c3da3b402be4450e0071ecb15665f\": container with ID starting with e89b16ffc410e506074d53a5f922cd16385c3da3b402be4450e0071ecb15665f not found: ID does not exist" containerID="e89b16ffc410e506074d53a5f922cd16385c3da3b402be4450e0071ecb15665f" Dec 11 16:27:13 crc kubenswrapper[5120]: I1211 16:27:13.960717 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e89b16ffc410e506074d53a5f922cd16385c3da3b402be4450e0071ecb15665f"} err="failed to get container status \"e89b16ffc410e506074d53a5f922cd16385c3da3b402be4450e0071ecb15665f\": rpc error: code = NotFound desc = could not find container \"e89b16ffc410e506074d53a5f922cd16385c3da3b402be4450e0071ecb15665f\": container with ID starting with e89b16ffc410e506074d53a5f922cd16385c3da3b402be4450e0071ecb15665f not found: ID does not exist" Dec 11 16:27:14 crc kubenswrapper[5120]: E1211 16:27:14.022950 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with 
ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:27:15 crc kubenswrapper[5120]: I1211 16:27:15.035355 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a5cf5d5-9069-4f40-afc8-984315ba76ac" path="/var/lib/kubelet/pods/2a5cf5d5-9069-4f40-afc8-984315ba76ac/volumes" Dec 11 16:27:25 crc kubenswrapper[5120]: E1211 16:27:25.022822 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" 
podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:27:28 crc kubenswrapper[5120]: E1211 16:27:28.021786 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7" Dec 11 16:27:28 crc kubenswrapper[5120]: I1211 16:27:28.718532 5120 patch_prober.go:28] interesting pod/machine-config-daemon-fpg9g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 16:27:28 crc kubenswrapper[5120]: I1211 16:27:28.718624 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fpg9g" podUID="e868a29f-b837-4513-ad30-f5b6c4354a09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 16:27:38 crc kubenswrapper[5120]: E1211 16:27:38.022841 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off 
pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-nxhlp" podUID="847525ea-e1cb-43ed-98e3-91baecb73494" Dec 11 16:27:43 crc kubenswrapper[5120]: E1211 16:27:43.022864 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown; artifact err: get manifest: build image source: reading manifest latest in image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index: manifest unknown\"" pod="service-telemetry/infrawatch-operators-vhlbh" podUID="89d28d44-5839-49aa-8893-f6eb0e3c79d7"